=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-069806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-069806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m17.756626636s)
-- stdout --
* [old-k8s-version-069806] minikube v1.33.1 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=19282
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/19282-720845/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-720845/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-069806" primary control-plane node in "old-k8s-version-069806" cluster
* Pulling base image v0.0.44-1721234491-19282 ...
* Restarting existing docker container for "old-k8s-version-069806" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-069806 addons enable metrics-server
* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
-- /stdout --
** stderr **
I0717 20:24:22.361435 933934 out.go:291] Setting OutFile to fd 1 ...
I0717 20:24:22.361600 933934 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:24:22.361611 933934 out.go:304] Setting ErrFile to fd 2...
I0717 20:24:22.361616 933934 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:24:22.361859 933934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-720845/.minikube/bin
I0717 20:24:22.362242 933934 out.go:298] Setting JSON to false
I0717 20:24:22.363330 933934 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14810,"bootTime":1721233052,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0717 20:24:22.363408 933934 start.go:139] virtualization:
I0717 20:24:22.365785 933934 out.go:177] * [old-k8s-version-069806] minikube v1.33.1 on Ubuntu 20.04 (arm64)
I0717 20:24:22.367675 933934 out.go:177] - MINIKUBE_LOCATION=19282
I0717 20:24:22.367725 933934 notify.go:220] Checking for updates...
I0717 20:24:22.372387 933934 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 20:24:22.374035 933934 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19282-720845/kubeconfig
I0717 20:24:22.375652 933934 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-720845/.minikube
I0717 20:24:22.377107 933934 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0717 20:24:22.378675 933934 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0717 20:24:22.381168 933934 config.go:182] Loaded profile config "old-k8s-version-069806": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0717 20:24:22.383299 933934 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
I0717 20:24:22.384722 933934 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 20:24:22.411139 933934 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
I0717 20:24:22.411269 933934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0717 20:24:22.497654 933934 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:68 SystemTime:2024-07-17 20:24:22.488092151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
I0717 20:24:22.497770 933934 docker.go:307] overlay module found
I0717 20:24:22.499856 933934 out.go:177] * Using the docker driver based on existing profile
I0717 20:24:22.501605 933934 start.go:297] selected driver: docker
I0717 20:24:22.501629 933934 start.go:901] validating driver "docker" against &{Name:old-k8s-version-069806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-069806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 20:24:22.501757 933934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 20:24:22.502384 933934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0717 20:24:22.616503 933934 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:68 SystemTime:2024-07-17 20:24:22.587988704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
I0717 20:24:22.616862 933934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 20:24:22.616929 933934 cni.go:84] Creating CNI manager for ""
I0717 20:24:22.616945 933934 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0717 20:24:22.616997 933934 start.go:340] cluster config:
{Name:old-k8s-version-069806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-069806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 20:24:22.618991 933934 out.go:177] * Starting "old-k8s-version-069806" primary control-plane node in "old-k8s-version-069806" cluster
I0717 20:24:22.620774 933934 cache.go:121] Beginning downloading kic base image for docker with containerd
I0717 20:24:22.622719 933934 out.go:177] * Pulling base image v0.0.44-1721234491-19282 ...
I0717 20:24:22.624361 933934 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0717 20:24:22.624428 933934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-720845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0717 20:24:22.624463 933934 cache.go:56] Caching tarball of preloaded images
I0717 20:24:22.624559 933934 preload.go:172] Found /home/jenkins/minikube-integration/19282-720845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0717 20:24:22.624574 933934 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0717 20:24:22.624691 933934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/old-k8s-version-069806/config.json ...
I0717 20:24:22.624915 933934 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 in local docker daemon
W0717 20:24:22.657056 933934 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 is of wrong architecture
I0717 20:24:22.657079 933934 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 to local cache
I0717 20:24:22.657161 933934 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 in local cache directory
I0717 20:24:22.657178 933934 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 in local cache directory, skipping pull
I0717 20:24:22.657182 933934 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 exists in cache, skipping pull
I0717 20:24:22.657191 933934 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 as a tarball
I0717 20:24:22.657196 933934 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 from local cache
I0717 20:24:22.782332 933934 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 from cached tarball
I0717 20:24:22.782373 933934 cache.go:194] Successfully downloaded all kic artifacts
I0717 20:24:22.782413 933934 start.go:360] acquireMachinesLock for old-k8s-version-069806: {Name:mkda59be69caa725b78e8ee12ffcee3804c6aa24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 20:24:22.782483 933934 start.go:364] duration metric: took 45.218µs to acquireMachinesLock for "old-k8s-version-069806"
I0717 20:24:22.782503 933934 start.go:96] Skipping create...Using existing machine configuration
I0717 20:24:22.782508 933934 fix.go:54] fixHost starting:
I0717 20:24:22.782806 933934 cli_runner.go:164] Run: docker container inspect old-k8s-version-069806 --format={{.State.Status}}
I0717 20:24:22.799554 933934 fix.go:112] recreateIfNeeded on old-k8s-version-069806: state=Stopped err=<nil>
W0717 20:24:22.799594 933934 fix.go:138] unexpected machine state, will restart: <nil>
I0717 20:24:22.803493 933934 out.go:177] * Restarting existing docker container for "old-k8s-version-069806" ...
I0717 20:24:22.805480 933934 cli_runner.go:164] Run: docker start old-k8s-version-069806
I0717 20:24:23.161539 933934 cli_runner.go:164] Run: docker container inspect old-k8s-version-069806 --format={{.State.Status}}
I0717 20:24:23.181855 933934 kic.go:430] container "old-k8s-version-069806" state is running.
I0717 20:24:23.182255 933934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-069806
I0717 20:24:23.203209 933934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/old-k8s-version-069806/config.json ...
I0717 20:24:23.203529 933934 machine.go:94] provisionDockerMachine start ...
I0717 20:24:23.203607 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
I0717 20:24:23.234055 933934 main.go:141] libmachine: Using SSH client type: native
I0717 20:24:23.234349 933934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33824 <nil> <nil>}
I0717 20:24:23.234358 933934 main.go:141] libmachine: About to run SSH command:
hostname
I0717 20:24:23.235020 933934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0717 20:24:26.371546 933934 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-069806
I0717 20:24:26.371574 933934 ubuntu.go:169] provisioning hostname "old-k8s-version-069806"
I0717 20:24:26.371647 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
I0717 20:24:26.391627 933934 main.go:141] libmachine: Using SSH client type: native
I0717 20:24:26.391869 933934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33824 <nil> <nil>}
I0717 20:24:26.391881 933934 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-069806 && echo "old-k8s-version-069806" | sudo tee /etc/hostname
I0717 20:24:26.543007 933934 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-069806
I0717 20:24:26.543157 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
I0717 20:24:26.569673 933934 main.go:141] libmachine: Using SSH client type: native
I0717 20:24:26.569924 933934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33824 <nil> <nil>}
I0717 20:24:26.569941 933934 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-069806' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-069806/g' /etc/hosts;
  else
    echo '127.0.1.1 old-k8s-version-069806' | sudo tee -a /etc/hosts;
  fi
fi
I0717 20:24:26.708266 933934 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 20:24:26.708343 933934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19282-720845/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-720845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-720845/.minikube}
I0717 20:24:26.708388 933934 ubuntu.go:177] setting up certificates
I0717 20:24:26.708425 933934 provision.go:84] configureAuth start
I0717 20:24:26.708522 933934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-069806
I0717 20:24:26.732363 933934 provision.go:143] copyHostCerts
I0717 20:24:26.732427 933934 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-720845/.minikube/ca.pem, removing ...
I0717 20:24:26.732436 933934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-720845/.minikube/ca.pem
I0717 20:24:26.732508 933934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-720845/.minikube/ca.pem (1078 bytes)
I0717 20:24:26.732605 933934 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-720845/.minikube/cert.pem, removing ...
I0717 20:24:26.732611 933934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-720845/.minikube/cert.pem
I0717 20:24:26.732643 933934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-720845/.minikube/cert.pem (1123 bytes)
I0717 20:24:26.732693 933934 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-720845/.minikube/key.pem, removing ...
I0717 20:24:26.732698 933934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-720845/.minikube/key.pem
I0717 20:24:26.732724 933934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-720845/.minikube/key.pem (1679 bytes)
I0717 20:24:26.732768 933934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-720845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-069806 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-069806]
I0717 20:24:27.112510 933934 provision.go:177] copyRemoteCerts
I0717 20:24:27.112625 933934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 20:24:27.112701 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
I0717 20:24:27.132540 933934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/old-k8s-version-069806/id_rsa Username:docker}
I0717 20:24:27.227046 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0717 20:24:27.252833 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0717 20:24:27.278711 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0717 20:24:27.310082 933934 provision.go:87] duration metric: took 601.622597ms to configureAuth
I0717 20:24:27.310111 933934 ubuntu.go:193] setting minikube options for container-runtime
I0717 20:24:27.310307 933934 config.go:182] Loaded profile config "old-k8s-version-069806": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0717 20:24:27.310319 933934 machine.go:97] duration metric: took 4.106770581s to provisionDockerMachine
I0717 20:24:27.310336 933934 start.go:293] postStartSetup for "old-k8s-version-069806" (driver="docker")
I0717 20:24:27.310350 933934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 20:24:27.310410 933934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 20:24:27.310454 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
I0717 20:24:27.329494 933934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/old-k8s-version-069806/id_rsa Username:docker}
I0717 20:24:27.432906 933934 ssh_runner.go:195] Run: cat /etc/os-release
I0717 20:24:27.440673 933934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0717 20:24:27.440762 933934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0717 20:24:27.440788 933934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0717 20:24:27.440834 933934 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0717 20:24:27.440864 933934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-720845/.minikube/addons for local assets ...
I0717 20:24:27.440964 933934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-720845/.minikube/files for local assets ...
I0717 20:24:27.441098 933934 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-720845/.minikube/files/etc/ssl/certs/7262252.pem -> 7262252.pem in /etc/ssl/certs
I0717 20:24:27.441254 933934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 20:24:27.451715 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/files/etc/ssl/certs/7262252.pem --> /etc/ssl/certs/7262252.pem (1708 bytes)
I0717 20:24:27.486719 933934 start.go:296] duration metric: took 176.364021ms for postStartSetup
I0717 20:24:27.486847 933934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0717 20:24:27.486944 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
I0717 20:24:27.506408 933934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/old-k8s-version-069806/id_rsa Username:docker}
I0717 20:24:27.601844 933934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0717 20:24:27.606920 933934 fix.go:56] duration metric: took 4.824402733s for fixHost
I0717 20:24:27.606945 933934 start.go:83] releasing machines lock for "old-k8s-version-069806", held for 4.82445471s
I0717 20:24:27.607017 933934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-069806
I0717 20:24:27.626375 933934 ssh_runner.go:195] Run: cat /version.json
I0717 20:24:27.626438 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
I0717 20:24:27.626669 933934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 20:24:27.626734 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
I0717 20:24:27.654129 933934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/old-k8s-version-069806/id_rsa Username:docker}
I0717 20:24:27.665946 933934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/old-k8s-version-069806/id_rsa Username:docker}
I0717 20:24:27.896216 933934 ssh_runner.go:195] Run: systemctl --version
I0717 20:24:27.900756 933934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0717 20:24:27.905260 933934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0717 20:24:27.932174 933934 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0717 20:24:27.932258 933934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 20:24:27.941298 933934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0717 20:24:27.941327 933934 start.go:495] detecting cgroup driver to use...
I0717 20:24:27.941361 933934 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0717 20:24:27.941415 933934 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 20:24:27.956583 933934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 20:24:27.968994 933934 docker.go:217] disabling cri-docker service (if available) ...
I0717 20:24:27.969066 933934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0717 20:24:27.982838 933934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0717 20:24:28.000460 933934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0717 20:24:28.106593 933934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0717 20:24:28.207017 933934 docker.go:233] disabling docker service ...
I0717 20:24:28.207085 933934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0717 20:24:28.221965 933934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0717 20:24:28.234818 933934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0717 20:24:28.323811 933934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0717 20:24:28.419756 933934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0717 20:24:28.433226 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 20:24:28.460878 933934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0717 20:24:28.473310 933934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 20:24:28.484005 933934 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 20:24:28.484103 933934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 20:24:28.502728 933934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 20:24:28.517779 933934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 20:24:28.529870 933934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 20:24:28.540776 933934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 20:24:28.551249 933934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 20:24:28.562393 933934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 20:24:28.571958 933934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 20:24:28.581134 933934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 20:24:28.690986 933934 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0717 20:24:28.884812 933934 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0717 20:24:28.884896 933934 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 20:24:28.888960 933934 start.go:563] Will wait 60s for crictl version
I0717 20:24:28.889027 933934 ssh_runner.go:195] Run: which crictl
I0717 20:24:28.893691 933934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0717 20:24:28.939074 933934 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.19
RuntimeApiVersion: v1
I0717 20:24:28.939155 933934 ssh_runner.go:195] Run: containerd --version
I0717 20:24:28.966444 933934 ssh_runner.go:195] Run: containerd --version
I0717 20:24:28.991702 933934 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
I0717 20:24:28.993495 933934 cli_runner.go:164] Run: docker network inspect old-k8s-version-069806 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0717 20:24:29.011591 933934 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0717 20:24:29.015776 933934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 20:24:29.027761 933934 kubeadm.go:883] updating cluster {Name:old-k8s-version-069806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-069806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0717 20:24:29.027894 933934 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0717 20:24:29.027969 933934 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 20:24:29.070038 933934 containerd.go:627] all images are preloaded for containerd runtime.
I0717 20:24:29.070064 933934 containerd.go:534] Images already preloaded, skipping extraction
I0717 20:24:29.070124 933934 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 20:24:29.107810 933934 containerd.go:627] all images are preloaded for containerd runtime.
I0717 20:24:29.107836 933934 cache_images.go:84] Images are preloaded, skipping loading
I0717 20:24:29.107845 933934 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0717 20:24:29.107965 933934 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-069806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-069806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0717 20:24:29.108038 933934 ssh_runner.go:195] Run: sudo crictl info
I0717 20:24:29.155340 933934 cni.go:84] Creating CNI manager for ""
I0717 20:24:29.155364 933934 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0717 20:24:29.155374 933934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0717 20:24:29.155394 933934 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-069806 NodeName:old-k8s-version-069806 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0717 20:24:29.155540 933934 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "old-k8s-version-069806"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0717 20:24:29.155626 933934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0717 20:24:29.165912 933934 binaries.go:44] Found k8s binaries, skipping transfer
I0717 20:24:29.166018 933934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0717 20:24:29.176908 933934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0717 20:24:29.197452 933934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0717 20:24:29.217867 933934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0717 20:24:29.236927 933934 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0717 20:24:29.241021 933934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 20:24:29.252344 933934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 20:24:29.344067 933934 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0717 20:24:29.358767 933934 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/old-k8s-version-069806 for IP: 192.168.76.2
I0717 20:24:29.358789 933934 certs.go:194] generating shared ca certs ...
I0717 20:24:29.358805 933934 certs.go:226] acquiring lock for ca certs: {Name:mk70fd46ee08fce14a9e7548fea7cc8fad7ae6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:24:29.358939 933934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-720845/.minikube/ca.key
I0717 20:24:29.358981 933934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-720845/.minikube/proxy-client-ca.key
I0717 20:24:29.358988 933934 certs.go:256] generating profile certs ...
I0717 20:24:29.359076 933934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/old-k8s-version-069806/client.key
I0717 20:24:29.359137 933934 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/old-k8s-version-069806/apiserver.key.a9b625be
I0717 20:24:29.359185 933934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/old-k8s-version-069806/proxy-client.key
I0717 20:24:29.359294 933934 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/726225.pem (1338 bytes)
W0717 20:24:29.359324 933934 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-720845/.minikube/certs/726225_empty.pem, impossibly tiny 0 bytes
I0717 20:24:29.359332 933934 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca-key.pem (1675 bytes)
I0717 20:24:29.359358 933934 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca.pem (1078 bytes)
I0717 20:24:29.359381 933934 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/cert.pem (1123 bytes)
I0717 20:24:29.359404 933934 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/key.pem (1679 bytes)
I0717 20:24:29.359445 933934 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/files/etc/ssl/certs/7262252.pem (1708 bytes)
I0717 20:24:29.360168 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0717 20:24:29.394896 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0717 20:24:29.429187 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0717 20:24:29.457525 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0717 20:24:29.486295 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/old-k8s-version-069806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0717 20:24:29.512168 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/old-k8s-version-069806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0717 20:24:29.541302 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/old-k8s-version-069806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0717 20:24:29.570429 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/old-k8s-version-069806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0717 20:24:29.598000 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/files/etc/ssl/certs/7262252.pem --> /usr/share/ca-certificates/7262252.pem (1708 bytes)
I0717 20:24:29.622949 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0717 20:24:29.649050 933934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/certs/726225.pem --> /usr/share/ca-certificates/726225.pem (1338 bytes)
I0717 20:24:29.675970 933934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0717 20:24:29.695279 933934 ssh_runner.go:195] Run: openssl version
I0717 20:24:29.702102 933934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7262252.pem && ln -fs /usr/share/ca-certificates/7262252.pem /etc/ssl/certs/7262252.pem"
I0717 20:24:29.711668 933934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7262252.pem
I0717 20:24:29.715536 933934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 19:43 /usr/share/ca-certificates/7262252.pem
I0717 20:24:29.715614 933934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7262252.pem
I0717 20:24:29.722676 933934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7262252.pem /etc/ssl/certs/3ec20f2e.0"
I0717 20:24:29.731892 933934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0717 20:24:29.741654 933934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0717 20:24:29.745305 933934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:36 /usr/share/ca-certificates/minikubeCA.pem
I0717 20:24:29.745419 933934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0717 20:24:29.752421 933934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0717 20:24:29.761728 933934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/726225.pem && ln -fs /usr/share/ca-certificates/726225.pem /etc/ssl/certs/726225.pem"
I0717 20:24:29.772472 933934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/726225.pem
I0717 20:24:29.776319 933934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 19:43 /usr/share/ca-certificates/726225.pem
I0717 20:24:29.776415 933934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/726225.pem
I0717 20:24:29.783551 933934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/726225.pem /etc/ssl/certs/51391683.0"
I0717 20:24:29.792880 933934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0717 20:24:29.796571 933934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0717 20:24:29.803343 933934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0717 20:24:29.810364 933934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0717 20:24:29.817376 933934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0717 20:24:29.824211 933934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0717 20:24:29.831349 933934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0717 20:24:29.838320 933934 kubeadm.go:392] StartCluster: {Name:old-k8s-version-069806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-069806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 20:24:29.838437 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0717 20:24:29.838505 933934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0717 20:24:29.883620 933934 cri.go:89] found id: "cedac39d388652063b8130953072e79d032964b2a68154d6caf6d05d410673da"
I0717 20:24:29.883645 933934 cri.go:89] found id: "03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294"
I0717 20:24:29.883651 933934 cri.go:89] found id: "62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b"
I0717 20:24:29.883655 933934 cri.go:89] found id: "0a4ebe1729293ed43248c2cc5f011a443aa3d9aeac69ec4546c5c944f657b6dd"
I0717 20:24:29.883666 933934 cri.go:89] found id: "3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4"
I0717 20:24:29.883671 933934 cri.go:89] found id: "bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef"
I0717 20:24:29.883674 933934 cri.go:89] found id: "fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f"
I0717 20:24:29.883677 933934 cri.go:89] found id: "e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1"
I0717 20:24:29.883680 933934 cri.go:89] found id: "f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc"
I0717 20:24:29.883687 933934 cri.go:89] found id: ""
I0717 20:24:29.883739 933934 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0717 20:24:29.909404 933934 cri.go:116] JSON = null
W0717 20:24:29.909470 933934 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 9
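The "unpause failed" warning is the result of cross-checking two views of the runtime: crictl's kube-system container IDs (9 found above) against runc's JSON listing, which came back as the literal string "null". A sketch of that comparison, with the command strings copied from the log and runCmd a hypothetical SSH-backed helper:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// checkPaused mirrors the cross-check behind the warning above.
func checkPaused(runCmd func(string) (string, error)) error {
	out, err := runCmd(`sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"`)
	if err != nil {
		return err
	}
	ids := strings.Fields(out) // one container ID per line

	raw, err := runCmd(`sudo runc --root /run/containerd/runc/k8s.io list -f json`)
	if err != nil {
		return err
	}
	var listed []struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	// "JSON = null" in the log means raw was the literal "null",
	// which unmarshals to an empty slice here.
	if err := json.Unmarshal([]byte(raw), &listed); err != nil {
		return err
	}
	if len(listed) == 0 && len(ids) > 0 {
		return fmt.Errorf("list paused: list returned 0 containers, but ps returned %d", len(ids))
	}
	return nil
}

func main() {
	err := checkPaused(func(cmd string) (string, error) {
		if strings.Contains(cmd, "crictl") {
			return "cedac39d3886\n03f122fd4abc\n", nil
		}
		return "null", nil // mirrors "JSON = null" above
	})
	fmt.Println(err)
}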
I0717 20:24:29.909556 933934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0717 20:24:29.920679 933934 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0717 20:24:29.920703 933934 kubeadm.go:593] restartPrimaryControlPlane start ...
I0717 20:24:29.920796 933934 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0717 20:24:29.932617 933934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0717 20:24:29.933262 933934 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-069806" does not appear in /home/jenkins/minikube-integration/19282-720845/kubeconfig
I0717 20:24:29.933545 933934 kubeconfig.go:62] /home/jenkins/minikube-integration/19282-720845/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-069806" cluster setting kubeconfig missing "old-k8s-version-069806" context setting]
I0717 20:24:29.934053 933934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-720845/kubeconfig: {Name:mkbb7ab9923e54de6b296ef688e430b421215a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
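The lock.go entries acquire a file lock (with the Delay:500ms / Timeout:1m0s parameters shown) before rewriting kubeconfig, so concurrent minikube processes cannot clobber it. A sketch of that pattern; the .lock-file mechanism here is an assumption for illustration, not necessarily minikube's implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// writeFileLocked guards a write with an advisory lock file, polling
// with the Delay/Timeout parameters seen in the lock entry above.
func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			break // lock acquired
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", lock)
		}
		time.Sleep(delay)
	}
	defer os.Remove(lock)
	return os.WriteFile(path, data, 0o600)
}

func main() {
	if err := writeFileLocked("kubeconfig", []byte("apiVersion: v1\n"), 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
	}
}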
I0717 20:24:29.935839 933934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0717 20:24:29.947611 933934 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0717 20:24:29.947647 933934 kubeadm.go:597] duration metric: took 26.938238ms to restartPrimaryControlPlane
I0717 20:24:29.947657 933934 kubeadm.go:394] duration metric: took 109.346701ms to StartCluster
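The two "duration metric" lines are plain time.Since measurements around each phase. A sketch of the wrapper pattern (names hypothetical):

package main

import (
	"log"
	"time"
)

// timed wraps a step and emits the "duration metric" line seen above.
func timed(name string, step func() error) error {
	start := time.Now()
	err := step()
	log.Printf("duration metric: took %s to %s", time.Since(start), name)
	return err
}

func main() {
	_ = timed("restartPrimaryControlPlane", func() error {
		time.Sleep(25 * time.Millisecond) // stand-in for the real work
		return nil
	})
}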
I0717 20:24:29.947681 933934 settings.go:142] acquiring lock: {Name:mk16909496744bd83f0170452b855928c1fb4054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:24:29.947753 933934 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19282-720845/kubeconfig
I0717 20:24:29.948850 933934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-720845/kubeconfig: {Name:mkbb7ab9923e54de6b296ef688e430b421215a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:24:29.949084 933934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0717 20:24:29.949499 933934 config.go:182] Loaded profile config "old-k8s-version-069806": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0717 20:24:29.949478 933934 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0717 20:24:29.949563 933934 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-069806"
I0717 20:24:29.949571 933934 addons.go:69] Setting dashboard=true in profile "old-k8s-version-069806"
I0717 20:24:29.949590 933934 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-069806"
I0717 20:24:29.949593 933934 addons.go:234] Setting addon dashboard=true in "old-k8s-version-069806"
W0717 20:24:29.949597 933934 addons.go:243] addon storage-provisioner should already be in state true
W0717 20:24:29.949600 933934 addons.go:243] addon dashboard should already be in state true
I0717 20:24:29.949622 933934 host.go:66] Checking if "old-k8s-version-069806" exists ...
I0717 20:24:29.949714 933934 host.go:66] Checking if "old-k8s-version-069806" exists ...
I0717 20:24:29.950048 933934 cli_runner.go:164] Run: docker container inspect old-k8s-version-069806 --format={{.State.Status}}
I0717 20:24:29.950215 933934 cli_runner.go:164] Run: docker container inspect old-k8s-version-069806 --format={{.State.Status}}
I0717 20:24:29.950677 933934 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-069806"
I0717 20:24:29.950714 933934 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-069806"
W0717 20:24:29.950721 933934 addons.go:243] addon metrics-server should already be in state true
I0717 20:24:29.950747 933934 host.go:66] Checking if "old-k8s-version-069806" exists ...
I0717 20:24:29.951157 933934 cli_runner.go:164] Run: docker container inspect old-k8s-version-069806 --format={{.State.Status}}
I0717 20:24:29.949565 933934 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-069806"
I0717 20:24:29.953466 933934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-069806"
I0717 20:24:29.953764 933934 cli_runner.go:164] Run: docker container inspect old-k8s-version-069806 --format={{.State.Status}}
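Each addon being enabled immediately runs its own "docker container inspect ... {{.State.Status}}" check, and the slightly out-of-order timestamps above suggest the addons are set up concurrently. A sketch of that fan-out, assuming one goroutine per enabled addon (minikube's actual scheduling may differ):

package main

import (
	"log"
	"os/exec"
	"strings"
	"sync"
)

// enableAddons fans out one goroutine per enabled addon; each checks
// the machine state independently, as in the interleaved lines above.
func enableAddons(profile string, toEnable map[string]bool) {
	var wg sync.WaitGroup
	for name, on := range toEnable {
		if !on {
			continue
		}
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			log.Printf("Setting addon %s=true in %q", name, profile)
			out, err := exec.Command("docker", "container", "inspect",
				profile, "--format", "{{.State.Status}}").Output()
			if err != nil {
				log.Printf("%s: inspect failed: %v", name, err)
				return
			}
			log.Printf("%s: container is %s", name, strings.TrimSpace(string(out)))
		}(name)
	}
	wg.Wait()
}

func main() {
	enableAddons("old-k8s-version-069806", map[string]bool{
		"storage-provisioner": true, "dashboard": true,
		"metrics-server": true, "default-storageclass": true,
	})
}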
I0717 20:24:29.954957 933934 out.go:177] * Verifying Kubernetes components...
I0717 20:24:29.960508 933934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 20:24:30.010428 933934 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0717 20:24:30.011416 933934 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0717 20:24:30.012415 933934 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0717 20:24:30.012443 933934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
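"scp memory" means the manifest is rendered in memory and streamed to the remote path rather than copied from a local file. A sketch over an established *ssh.Client (golang.org/x/crypto/ssh); piping through sudo tee is an assumption, not necessarily minikube's transfer mechanism:

package runner

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// scpMemory streams an in-memory manifest to a remote path over an
// existing SSH client, matching the "scp memory --> ..." lines above.
func scpMemory(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// write stdin to dst with root privileges, discarding tee's echo
	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", dst))
}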
I0717 20:24:30.012520 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
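The inspect call above recovers the host port that Docker mapped to the container's 22/tcp, which is how the SSH endpoint (127.0.0.1:33824 below) is discovered. A standalone equivalent, with the --format template copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port mapped to the container's 22/tcp,
// using the same --format template as the cli_runner line above.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`,
		container).Output()
	if err != nil {
		return "", err
	}
	// the template output is wrapped in single quotes; strip them
	return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
}

func main() {
	port, err := sshHostPort("old-k8s-version-069806")
	fmt.Println(port, err)
}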
I0717 20:24:30.016331 933934 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-069806"
W0717 20:24:30.016362 933934 addons.go:243] addon default-storageclass should already be in state true
I0717 20:24:30.016393 933934 host.go:66] Checking if "old-k8s-version-069806" exists ...
I0717 20:24:30.018108 933934 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0717 20:24:30.018657 933934 cli_runner.go:164] Run: docker container inspect old-k8s-version-069806 --format={{.State.Status}}
I0717 20:24:30.026719 933934 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0717 20:24:30.026746 933934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0717 20:24:30.026824 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
I0717 20:24:30.058823 933934 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0717 20:24:30.061187 933934 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0717 20:24:30.061224 933934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0717 20:24:30.061340 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
I0717 20:24:30.083754 933934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/old-k8s-version-069806/id_rsa Username:docker}
I0717 20:24:30.103106 933934 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0717 20:24:30.103139 933934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0717 20:24:30.103212 933934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-069806
I0717 20:24:30.146962 933934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/old-k8s-version-069806/id_rsa Username:docker}
I0717 20:24:30.155738 933934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/old-k8s-version-069806/id_rsa Username:docker}
I0717 20:24:30.194006 933934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/old-k8s-version-069806/id_rsa Username:docker}
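Each sshutil line above dials the forwarded port with the machine's id_rsa. A minimal equivalent with golang.org/x/crypto/ssh; host-key verification is skipped only to keep the sketch short:

package runner

import (
	"net"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient opens a key-authenticated SSH connection like the
// sshutil entries above, e.g. newSSHClient("127.0.0.1", "33824",
// "/path/to/id_rsa", "docker").
func newSSHClient(ip, port, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local test VM only
	}
	return ssh.Dial("tcp", net.JoinHostPort(ip, port), cfg)
}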
I0717 20:24:30.203438 933934 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0717 20:24:30.239663 933934 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-069806" to be "Ready" ...
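node_ready.go polls the node object until its Ready condition is True, tolerating the connection-refused errors seen below while the apiserver restarts. A client-go sketch of that loop (minikube's own code may differ in detail):

package runner

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the node reports Ready, as in the
// node_ready entries above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// matches the "error getting node ... connection refused"
			// lines: keep polling while the apiserver comes back
			time.Sleep(2 * time.Second)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q never became Ready within %s", name, timeout)
}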
I0717 20:24:30.279914 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0717 20:24:30.308012 933934 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0717 20:24:30.308103 933934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0717 20:24:30.343886 933934 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0717 20:24:30.343955 933934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0717 20:24:30.350683 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0717 20:24:30.353435 933934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0717 20:24:30.353463 933934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0717 20:24:30.391240 933934 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0717 20:24:30.391266 933934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0717 20:24:30.411729 933934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0717 20:24:30.411755 933934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0717 20:24:30.446187 933934 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0717 20:24:30.446218 933934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
W0717 20:24:30.448955 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:30.448988 933934 retry.go:31] will retry after 220.918466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
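Every failed apply above is rescheduled by retry.go with a growing, jittered delay (220ms, 237ms, 294ms, ... climbing to several seconds further down). A sketch of that backoff loop; the exact growth factor and jitter are assumptions fitted to the observed delays:

package main

import (
	"fmt"
	"log"
	"math/rand"
	"time"
)

// retryApply re-runs a kubectl apply with jittered exponential backoff
// until it succeeds or the time budget is spent, emitting the
// "will retry after ..." lines seen above.
func retryApply(apply func() error, maxTime time.Duration) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		err := apply()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxTime {
			return err
		}
		// jittered exponential growth, loosely matching the observed
		// 220ms -> 237ms -> 294ms -> ... -> multi-second delays
		d := delay + time.Duration(rand.Int63n(int64(delay/2)))
		log.Printf("will retry after %s: %v", d, err)
		time.Sleep(d)
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	_ = retryApply(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("The connection to the server localhost:8443 was refused")
		}
		return nil // the apply eventually lands once the apiserver is up
	}, time.Minute)
}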
I0717 20:24:30.474593 933934 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0717 20:24:30.474623 933934 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0717 20:24:30.483001 933934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0717 20:24:30.483028 933934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0717 20:24:30.511061 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:24:30.532408 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:30.532450 933934 retry.go:31] will retry after 237.70842ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:30.537552 933934 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0717 20:24:30.537584 933934 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0717 20:24:30.558253 933934 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0717 20:24:30.558279 933934 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0717 20:24:30.578688 933934 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0717 20:24:30.578717 933934 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0717 20:24:30.602947 933934 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0717 20:24:30.602975 933934 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
W0717 20:24:30.620800 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:30.620836 933934 retry.go:31] will retry after 294.344052ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:30.624569 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0717 20:24:30.670831 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0717 20:24:30.717086 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:30.717194 933934 retry.go:31] will retry after 252.495536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:24:30.767543 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:30.767634 933934 retry.go:31] will retry after 356.263852ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:30.770872 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0717 20:24:30.845596 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:30.845629 933934 retry.go:31] will retry after 335.890512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:30.915784 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0717 20:24:30.969887 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0717 20:24:31.003615 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.003659 933934 retry.go:31] will retry after 319.794296ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:24:31.066108 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.066148 933934 retry.go:31] will retry after 376.760595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.124303 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0717 20:24:31.181703 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0717 20:24:31.202722 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.202920 933934 retry.go:31] will retry after 693.030919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:24:31.264832 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.264864 933934 retry.go:31] will retry after 652.00017ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.324023 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:24:31.395000 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.395035 933934 retry.go:31] will retry after 319.844346ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.443151 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0717 20:24:31.522859 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.522895 933934 retry.go:31] will retry after 550.426328ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.715121 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:24:31.793108 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.793143 933934 retry.go:31] will retry after 470.462011ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:31.896218 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0717 20:24:31.917588 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0717 20:24:32.011273 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:32.011308 933934 retry.go:31] will retry after 717.978064ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:24:32.025586 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:32.025633 933934 retry.go:31] will retry after 688.036208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:32.073833 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0717 20:24:32.156402 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:32.156439 933934 retry.go:31] will retry after 804.849597ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:32.240928 933934 node_ready.go:53] error getting node "old-k8s-version-069806": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-069806": dial tcp 192.168.76.2:8443: connect: connection refused
I0717 20:24:32.264230 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:24:32.348914 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:32.348948 933934 retry.go:31] will retry after 753.560302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:32.714289 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0717 20:24:32.729614 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0717 20:24:32.814985 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:32.815019 933934 retry.go:31] will retry after 1.70995206s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:24:32.835387 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:32.835419 933934 retry.go:31] will retry after 1.033052501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:32.961566 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0717 20:24:33.035846 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:33.035883 933934 retry.go:31] will retry after 683.194987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:33.103044 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:24:33.176414 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:33.176446 933934 retry.go:31] will retry after 984.17656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:33.719575 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0717 20:24:33.795856 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:33.795903 933934 retry.go:31] will retry after 2.044995846s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:33.869209 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0717 20:24:33.949879 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:33.949962 933934 retry.go:31] will retry after 1.905710864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:34.161482 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:24:34.231265 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:34.231299 933934 retry.go:31] will retry after 4.242109517s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:34.525198 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0717 20:24:34.614329 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:34.614379 933934 retry.go:31] will retry after 2.2166745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:34.740941 933934 node_ready.go:53] error getting node "old-k8s-version-069806": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-069806": dial tcp 192.168.76.2:8443: connect: connection refused
I0717 20:24:35.841183 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0717 20:24:35.856763 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0717 20:24:35.947506 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:35.947563 933934 retry.go:31] will retry after 1.593813903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:24:35.956182 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:35.956217 933934 retry.go:31] will retry after 3.644568339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:36.741161 933934 node_ready.go:53] error getting node "old-k8s-version-069806": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-069806": dial tcp 192.168.76.2:8443: connect: connection refused
I0717 20:24:36.831300 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0717 20:24:36.903765 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:36.903801 933934 retry.go:31] will retry after 3.217969809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:37.542429 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0717 20:24:37.613029 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:37.613064 933934 retry.go:31] will retry after 6.11181412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:38.474153 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:24:38.663570 933934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:38.663620 933934 retry.go:31] will retry after 5.57518338s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:24:39.600977 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0717 20:24:40.122878 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0717 20:24:43.725089 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0717 20:24:44.239027 933934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0717 20:24:47.113690 933934 node_ready.go:49] node "old-k8s-version-069806" has status "Ready":"True"
I0717 20:24:47.113716 933934 node_ready.go:38] duration metric: took 16.873996969s for node "old-k8s-version-069806" to be "Ready" ...
I0717 20:24:47.113727 933934 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
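Each pod_ready.go "waiting ... to be \"Ready\"" entry that follows reduces to reading the PodReady condition off the pod's status. A condensed client-go sketch of that check, assuming an already-authenticated clientset; the helper name is ours, not minikube's:

    package readiness

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named pod's PodReady condition is True,
    // i.e. whether a poll here would log has status "Ready":"True".
    func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }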
I0717 20:24:47.458277 933934 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-9djzb" in "kube-system" namespace to be "Ready" ...
I0717 20:24:47.588913 933934 pod_ready.go:92] pod "coredns-74ff55c5b-9djzb" in "kube-system" namespace has status "Ready":"True"
I0717 20:24:47.588994 933934 pod_ready.go:81] duration metric: took 130.626529ms for pod "coredns-74ff55c5b-9djzb" in "kube-system" namespace to be "Ready" ...
I0717 20:24:47.589022 933934 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-069806" in "kube-system" namespace to be "Ready" ...
I0717 20:24:47.637205 933934 pod_ready.go:92] pod "etcd-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"True"
I0717 20:24:47.637287 933934 pod_ready.go:81] duration metric: took 48.226749ms for pod "etcd-old-k8s-version-069806" in "kube-system" namespace to be "Ready" ...
I0717 20:24:47.637317 933934 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-069806" in "kube-system" namespace to be "Ready" ...
I0717 20:24:47.685841 933934 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"True"
I0717 20:24:47.685918 933934 pod_ready.go:81] duration metric: took 48.579175ms for pod "kube-apiserver-old-k8s-version-069806" in "kube-system" namespace to be "Ready" ...
I0717 20:24:47.685945 933934 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-069806" in "kube-system" namespace to be "Ready" ...
I0717 20:24:48.563209 933934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.440288721s)
I0717 20:24:48.565544 933934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.964523773s)
I0717 20:24:49.087493 933934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.848404431s)
I0717 20:24:49.087579 933934 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-069806"
I0717 20:24:49.087767 933934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.362632755s)
I0717 20:24:49.090298 933934 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-069806 addons enable metrics-server
I0717 20:24:49.092002 933934 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0717 20:24:49.093926 933934 addons.go:510] duration metric: took 19.144446544s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I0717 20:24:49.691938 933934 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"False"
[... 27 near-identical pod_ready.go:102 entries omitted: pod "kube-controller-manager-old-k8s-version-069806" kept reporting "Ready":"False" from 20:24:52.193749 through 20:25:52.193357, polled at ~2.5s intervals ...]
I0717 20:25:54.692959 933934 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"False"
I0717 20:25:55.200201 933934 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"True"
I0717 20:25:55.200229 933934 pod_ready.go:81] duration metric: took 1m7.514262075s for pod "kube-controller-manager-old-k8s-version-069806" in "kube-system" namespace to be "Ready" ...
I0717 20:25:55.200243 933934 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gh8ms" in "kube-system" namespace to be "Ready" ...
I0717 20:25:55.207436 933934 pod_ready.go:92] pod "kube-proxy-gh8ms" in "kube-system" namespace has status "Ready":"True"
I0717 20:25:55.207465 933934 pod_ready.go:81] duration metric: took 7.18782ms for pod "kube-proxy-gh8ms" in "kube-system" namespace to be "Ready" ...
I0717 20:25:55.207478 933934 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-069806" in "kube-system" namespace to be "Ready" ...
I0717 20:25:57.213200 933934 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"False"
I0717 20:25:59.213492 933934 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"False"
I0717 20:26:01.214393 933934 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"False"
I0717 20:26:03.713956 933934 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"False"
I0717 20:26:05.714202 933934 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"False"
I0717 20:26:08.213553 933934 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"False"
I0717 20:26:10.213661 933934 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"False"
I0717 20:26:12.714548 933934 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"False"
I0717 20:26:14.213352 933934 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-069806" in "kube-system" namespace has status "Ready":"True"
I0717 20:26:14.213379 933934 pod_ready.go:81] duration metric: took 19.005892685s for pod "kube-scheduler-old-k8s-version-069806" in "kube-system" namespace to be "Ready" ...
I0717 20:26:14.213391 933934 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace to be "Ready" ...
I0717 20:26:16.221414 933934 pod_ready.go:102] pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace has status "Ready":"False"
[... 102 near-identical pod_ready.go:102 entries omitted: pod "metrics-server-9975d5f86-v4sfl" kept reporting "Ready":"False" from 20:26:18.719896 through 20:30:10.735195, polled at ~2.5s intervals ...]
I0717 20:30:13.221116 933934 pod_ready.go:102] pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace has status "Ready":"False"
I0717 20:30:14.213455 933934 pod_ready.go:81] duration metric: took 4m0.000039803s for pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace to be "Ready" ...
E0717 20:30:14.213486 933934 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0717 20:30:14.213497 933934 pod_ready.go:38] duration metric: took 5m27.099755562s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
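The metrics-server wait above ends after exactly four minutes with the context package's deadline error surfacing through waitPodCondition. A sketch of a deadline-bounded poll in that shape, reusing the hypothetical podReady helper sketched earlier; the 2s interval is illustrative:

    package readiness

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/client-go/kubernetes"
    )

    // waitReady polls until the pod reports Ready or the deadline lapses,
    // in which case ctx.Err() yields "context deadline exceeded" as above.
    func waitReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        for {
            if ok, err := podReady(ctx, cs, ns, name); err == nil && ok {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("waitPodCondition: %w", ctx.Err())
            case <-time.After(2 * time.Second):
            }
        }
    }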
I0717 20:30:14.213513 933934 api_server.go:52] waiting for apiserver process to appear ...
I0717 20:30:14.213547 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0717 20:30:14.213608 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0717 20:30:14.267329 933934 cri.go:89] found id: "cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c"
I0717 20:30:14.267350 933934 cri.go:89] found id: "f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc"
I0717 20:30:14.267356 933934 cri.go:89] found id: ""
I0717 20:30:14.267364 933934 logs.go:276] 2 containers: [cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc]
I0717 20:30:14.267435 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.279495 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.283694 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0717 20:30:14.283767 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0717 20:30:14.332584 933934 cri.go:89] found id: "687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c"
I0717 20:30:14.332625 933934 cri.go:89] found id: "e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1"
I0717 20:30:14.332631 933934 cri.go:89] found id: ""
I0717 20:30:14.332639 933934 logs.go:276] 2 containers: [687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1]
I0717 20:30:14.332701 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.337032 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.341616 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0717 20:30:14.341702 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0717 20:30:14.416560 933934 cri.go:89] found id: "6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4"
I0717 20:30:14.416585 933934 cri.go:89] found id: "03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294"
I0717 20:30:14.416590 933934 cri.go:89] found id: ""
I0717 20:30:14.416597 933934 logs.go:276] 2 containers: [6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4 03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294]
I0717 20:30:14.416653 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.420874 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.426407 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0717 20:30:14.426493 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0717 20:30:14.474023 933934 cri.go:89] found id: "d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75"
I0717 20:30:14.474043 933934 cri.go:89] found id: "bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef"
I0717 20:30:14.474048 933934 cri.go:89] found id: ""
I0717 20:30:14.474055 933934 logs.go:276] 2 containers: [d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75 bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef]
I0717 20:30:14.474115 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.479393 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.483562 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0717 20:30:14.483645 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0717 20:30:14.546766 933934 cri.go:89] found id: "4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608"
I0717 20:30:14.546789 933934 cri.go:89] found id: "3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4"
I0717 20:30:14.546794 933934 cri.go:89] found id: ""
I0717 20:30:14.546801 933934 logs.go:276] 2 containers: [4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608 3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4]
I0717 20:30:14.546858 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.551041 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.555838 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0717 20:30:14.555914 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0717 20:30:14.607756 933934 cri.go:89] found id: "87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08"
I0717 20:30:14.607786 933934 cri.go:89] found id: "fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f"
I0717 20:30:14.607793 933934 cri.go:89] found id: ""
I0717 20:30:14.607800 933934 logs.go:276] 2 containers: [87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08 fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f]
I0717 20:30:14.607870 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.611951 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.615691 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0717 20:30:14.615756 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0717 20:30:14.665764 933934 cri.go:89] found id: "1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874"
I0717 20:30:14.665792 933934 cri.go:89] found id: "62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b"
I0717 20:30:14.665797 933934 cri.go:89] found id: ""
I0717 20:30:14.665805 933934 logs.go:276] 2 containers: [1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874 62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b]
I0717 20:30:14.665859 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.672728 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.676956 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0717 20:30:14.677042 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0717 20:30:14.739516 933934 cri.go:89] found id: "57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0"
I0717 20:30:14.739541 933934 cri.go:89] found id: "89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5"
I0717 20:30:14.739562 933934 cri.go:89] found id: ""
I0717 20:30:14.739570 933934 logs.go:276] 2 containers: [57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0 89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5]
I0717 20:30:14.739628 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.743476 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.748788 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0717 20:30:14.748859 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0717 20:30:14.825367 933934 cri.go:89] found id: "44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1"
I0717 20:30:14.825454 933934 cri.go:89] found id: ""
I0717 20:30:14.825477 933934 logs.go:276] 1 container: [44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1]
I0717 20:30:14.825567 933934 ssh_runner.go:195] Run: which crictl
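The cri.go/ssh_runner pairs above locate each component's containers with the same crictl invocation; two IDs per component is consistent with the earlier restart leaving an exited container beside the live one. A sketch of that listing step (the helper name is ours; the command string is verbatim from the log):

    package cri

    import (
        "os/exec"
        "strings"
    )

    // findContainers lists all container IDs (running or exited) whose
    // name matches the filter, one ID per line of crictl output.
    func findContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }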
I0717 20:30:14.829469 933934 logs.go:123] Gathering logs for containerd ...
I0717 20:30:14.829492 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0717 20:30:14.951471 933934 logs.go:123] Gathering logs for container status ...
I0717 20:30:14.951551 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0717 20:30:15.002401 933934 logs.go:123] Gathering logs for describe nodes ...
I0717 20:30:15.002553 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0717 20:30:15.257508 933934 logs.go:123] Gathering logs for kube-apiserver [f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc] ...
I0717 20:30:15.257583 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc"
I0717 20:30:15.326653 933934 logs.go:123] Gathering logs for etcd [687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c] ...
I0717 20:30:15.326685 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c"
I0717 20:30:15.396670 933934 logs.go:123] Gathering logs for etcd [e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1] ...
I0717 20:30:15.396750 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1"
I0717 20:30:15.473355 933934 logs.go:123] Gathering logs for kube-controller-manager [fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f] ...
I0717 20:30:15.473711 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f"
I0717 20:30:15.552525 933934 logs.go:123] Gathering logs for storage-provisioner [57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0] ...
I0717 20:30:15.552603 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0"
I0717 20:30:15.637695 933934 logs.go:123] Gathering logs for kindnet [62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b] ...
I0717 20:30:15.637726 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b"
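Each "Gathering logs for ..." step tails a fixed 400-line window: journalctl for systemd units (kubelet, containerd) and crictl for containers. A sketch of the container branch, with the command taken verbatim from the Run lines above:

    package cri

    import "os/exec"

    // tailContainerLogs mirrors `crictl logs --tail 400 <id>`; CombinedOutput
    // keeps both streams, since crictl replays container stderr on stderr.
    func tailContainerLogs(id string) (string, error) {
        out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }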
I0717 20:30:15.746665 933934 logs.go:123] Gathering logs for kubelet ...
I0717 20:30:15.746710 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0717 20:30:15.806130 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.042220 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-7976r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-7976r" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.806356 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082566 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.806573 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082721 662 reflector.go:138] object-"kube-system"/"coredns-token-mrwzx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mrwzx" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.806781 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082802 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.807012 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082891 662 reflector.go:138] object-"kube-system"/"kindnet-token-g6tv7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g6tv7" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.810861 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202724 662 reflector.go:138] object-"kube-system"/"metrics-server-token-fksnv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fksnv" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.811163 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202810 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-rtf2k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-rtf2k" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.811378 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202881 662 reflector.go:138] object-"default"/"default-token-9ftpp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-9ftpp" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.819326 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:49 old-k8s-version-069806 kubelet[662]: E0717 20:24:49.183838 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:15.820329 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:50 old-k8s-version-069806 kubelet[662]: E0717 20:24:50.150410 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.823214 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:01 old-k8s-version-069806 kubelet[662]: E0717 20:25:01.900030 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:15.825532 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:11 old-k8s-version-069806 kubelet[662]: E0717 20:25:11.253734 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.825892 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:12 old-k8s-version-069806 kubelet[662]: E0717 20:25:12.256602 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.826084 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:12 old-k8s-version-069806 kubelet[662]: E0717 20:25:12.874626 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.826439 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:16 old-k8s-version-069806 kubelet[662]: E0717 20:25:16.266437 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.827342 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:20 old-k8s-version-069806 kubelet[662]: E0717 20:25:20.299236 662 pod_workers.go:191] Error syncing pod b733c4a6-f6de-426d-86c9-67948261d437 ("storage-provisioner_kube-system(b733c4a6-f6de-426d-86c9-67948261d437)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b733c4a6-f6de-426d-86c9-67948261d437)"
W0717 20:30:15.830056 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:27 old-k8s-version-069806 kubelet[662]: E0717 20:25:27.898396 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:15.831034 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:30 old-k8s-version-069806 kubelet[662]: E0717 20:25:30.313468 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.831554 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:36 old-k8s-version-069806 kubelet[662]: E0717 20:25:36.266512 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.831804 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:40 old-k8s-version-069806 kubelet[662]: E0717 20:25:40.874479 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.832142 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:51 old-k8s-version-069806 kubelet[662]: E0717 20:25:51.875881 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.832612 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:53 old-k8s-version-069806 kubelet[662]: E0717 20:25:53.400984 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.832942 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:56 old-k8s-version-069806 kubelet[662]: E0717 20:25:56.266353 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.833146 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:03 old-k8s-version-069806 kubelet[662]: E0717 20:26:03.874232 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.833523 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:09 old-k8s-version-069806 kubelet[662]: E0717 20:26:09.873806 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.836067 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:17 old-k8s-version-069806 kubelet[662]: E0717 20:26:17.885290 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:15.836428 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:21 old-k8s-version-069806 kubelet[662]: E0717 20:26:21.873819 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.836617 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:30 old-k8s-version-069806 kubelet[662]: E0717 20:26:30.874108 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.837267 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:37 old-k8s-version-069806 kubelet[662]: E0717 20:26:37.530420 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.837473 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:45 old-k8s-version-069806 kubelet[662]: E0717 20:26:45.874342 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.837808 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:46 old-k8s-version-069806 kubelet[662]: E0717 20:26:46.266436 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.838137 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:56 old-k8s-version-069806 kubelet[662]: E0717 20:26:56.873836 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.838324 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:00 old-k8s-version-069806 kubelet[662]: E0717 20:27:00.874137 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.838656 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:11 old-k8s-version-069806 kubelet[662]: E0717 20:27:11.873895 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.838842 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:13 old-k8s-version-069806 kubelet[662]: E0717 20:27:13.874116 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.839175 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:24 old-k8s-version-069806 kubelet[662]: E0717 20:27:24.873860 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.839382 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:28 old-k8s-version-069806 kubelet[662]: E0717 20:27:28.874119 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.839712 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:37 old-k8s-version-069806 kubelet[662]: E0717 20:27:37.874238 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.842301 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:41 old-k8s-version-069806 kubelet[662]: E0717 20:27:41.890695 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:15.842898 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:48 old-k8s-version-069806 kubelet[662]: E0717 20:27:48.873801 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.843101 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:56 old-k8s-version-069806 kubelet[662]: E0717 20:27:56.874330 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.843697 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:01 old-k8s-version-069806 kubelet[662]: E0717 20:28:01.743439 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.844026 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:06 old-k8s-version-069806 kubelet[662]: E0717 20:28:06.266883 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.844220 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:07 old-k8s-version-069806 kubelet[662]: E0717 20:28:07.874945 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.844574 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:19 old-k8s-version-069806 kubelet[662]: E0717 20:28:19.877659 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.844765 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:22 old-k8s-version-069806 kubelet[662]: E0717 20:28:22.874306 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.845161 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:33 old-k8s-version-069806 kubelet[662]: E0717 20:28:33.876747 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.845353 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:36 old-k8s-version-069806 kubelet[662]: E0717 20:28:36.874299 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.845696 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:46 old-k8s-version-069806 kubelet[662]: E0717 20:28:46.874249 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.845885 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:48 old-k8s-version-069806 kubelet[662]: E0717 20:28:48.874132 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.846215 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:58 old-k8s-version-069806 kubelet[662]: E0717 20:28:58.873786 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.846400 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:59 old-k8s-version-069806 kubelet[662]: E0717 20:28:59.874380 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.846728 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:11 old-k8s-version-069806 kubelet[662]: E0717 20:29:11.874254 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.846913 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:12 old-k8s-version-069806 kubelet[662]: E0717 20:29:12.874187 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.847274 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:26 old-k8s-version-069806 kubelet[662]: E0717 20:29:26.873840 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.847468 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:27 old-k8s-version-069806 kubelet[662]: E0717 20:29:27.875007 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.847853 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:38 old-k8s-version-069806 kubelet[662]: E0717 20:29:38.874336 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.848049 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:39 old-k8s-version-069806 kubelet[662]: E0717 20:29:39.874118 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.848393 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:49 old-k8s-version-069806 kubelet[662]: E0717 20:29:49.873788 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.848587 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:53 old-k8s-version-069806 kubelet[662]: E0717 20:29:53.876300 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.848920 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:02 old-k8s-version-069806 kubelet[662]: E0717 20:30:02.874275 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.849105 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:07 old-k8s-version-069806 kubelet[662]: E0717 20:30:07.874813 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.849436 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:14 old-k8s-version-069806 kubelet[662]: E0717 20:30:14.873785 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
I0717 20:30:15.849448 933934 logs.go:123] Gathering logs for coredns [6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4] ...
I0717 20:30:15.849464 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4"
I0717 20:30:15.925345 933934 logs.go:123] Gathering logs for kube-scheduler [d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75] ...
I0717 20:30:15.925372 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75"
I0717 20:30:15.979819 933934 logs.go:123] Gathering logs for kube-controller-manager [87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08] ...
I0717 20:30:15.979850 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08"
I0717 20:30:16.115833 933934 logs.go:123] Gathering logs for storage-provisioner [89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5] ...
I0717 20:30:16.115883 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5"
I0717 20:30:16.208534 933934 logs.go:123] Gathering logs for kubernetes-dashboard [44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1] ...
I0717 20:30:16.208558 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1"
I0717 20:30:16.281385 933934 logs.go:123] Gathering logs for kindnet [1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874] ...
I0717 20:30:16.281410 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874"
I0717 20:30:16.387166 933934 logs.go:123] Gathering logs for dmesg ...
I0717 20:30:16.387204 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0717 20:30:16.451258 933934 logs.go:123] Gathering logs for kube-apiserver [cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c] ...
I0717 20:30:16.451330 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c"
I0717 20:30:16.530536 933934 logs.go:123] Gathering logs for coredns [03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294] ...
I0717 20:30:16.530569 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294"
I0717 20:30:16.587298 933934 logs.go:123] Gathering logs for kube-scheduler [bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef] ...
I0717 20:30:16.587323 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef"
I0717 20:30:16.653727 933934 logs.go:123] Gathering logs for kube-proxy [4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608] ...
I0717 20:30:16.653809 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608"
I0717 20:30:16.705193 933934 logs.go:123] Gathering logs for kube-proxy [3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4] ...
I0717 20:30:16.705229 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4"
I0717 20:30:16.766943 933934 out.go:304] Setting ErrFile to fd 2...
I0717 20:30:16.766971 933934 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0717 20:30:16.767041 933934 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W0717 20:30:16.767063 933934 out.go:239] Jul 17 20:29:49 old-k8s-version-069806 kubelet[662]: E0717 20:29:49.873788 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
Jul 17 20:29:49 old-k8s-version-069806 kubelet[662]: E0717 20:29:49.873788 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:16.767086 933934 out.go:239] Jul 17 20:29:53 old-k8s-version-069806 kubelet[662]: E0717 20:29:53.876300 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:29:53 old-k8s-version-069806 kubelet[662]: E0717 20:29:53.876300 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:16.767107 933934 out.go:239] Jul 17 20:30:02 old-k8s-version-069806 kubelet[662]: E0717 20:30:02.874275 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
Jul 17 20:30:02 old-k8s-version-069806 kubelet[662]: E0717 20:30:02.874275 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:16.767119 933934 out.go:239] Jul 17 20:30:07 old-k8s-version-069806 kubelet[662]: E0717 20:30:07.874813 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:30:07 old-k8s-version-069806 kubelet[662]: E0717 20:30:07.874813 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:16.767134 933934 out.go:239] Jul 17 20:30:14 old-k8s-version-069806 kubelet[662]: E0717 20:30:14.873785 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
Jul 17 20:30:14 old-k8s-version-069806 kubelet[662]: E0717 20:30:14.873785 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
I0717 20:30:16.767141 933934 out.go:304] Setting ErrFile to fd 2...
I0717 20:30:16.767155 933934 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:30:26.768428 933934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0717 20:30:26.788723 933934 api_server.go:72] duration metric: took 5m56.839602005s to wait for apiserver process to appear ...
I0717 20:30:26.788747 933934 api_server.go:88] waiting for apiserver healthz status ...
I0717 20:30:26.788782 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0717 20:30:26.788852 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0717 20:30:26.854663 933934 cri.go:89] found id: "cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c"
I0717 20:30:26.854684 933934 cri.go:89] found id: "f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc"
I0717 20:30:26.854699 933934 cri.go:89] found id: ""
I0717 20:30:26.854706 933934 logs.go:276] 2 containers: [cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc]
I0717 20:30:26.854760 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:26.859009 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:26.866186 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0717 20:30:26.866263 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0717 20:30:26.950492 933934 cri.go:89] found id: "687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c"
I0717 20:30:26.950587 933934 cri.go:89] found id: "e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1"
I0717 20:30:26.950628 933934 cri.go:89] found id: ""
I0717 20:30:26.950673 933934 logs.go:276] 2 containers: [687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1]
I0717 20:30:26.950792 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:26.957062 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:26.961377 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0717 20:30:26.961473 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0717 20:30:27.059207 933934 cri.go:89] found id: "6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4"
I0717 20:30:27.059234 933934 cri.go:89] found id: "03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294"
I0717 20:30:27.059245 933934 cri.go:89] found id: ""
I0717 20:30:27.059261 933934 logs.go:276] 2 containers: [6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4 03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294]
I0717 20:30:27.059365 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.067097 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.075702 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0717 20:30:27.075928 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0717 20:30:27.167718 933934 cri.go:89] found id: "d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75"
I0717 20:30:27.167820 933934 cri.go:89] found id: "bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef"
I0717 20:30:27.167851 933934 cri.go:89] found id: ""
I0717 20:30:27.167885 933934 logs.go:276] 2 containers: [d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75 bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef]
I0717 20:30:27.168019 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.178946 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.187082 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0717 20:30:27.187280 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0717 20:30:27.264168 933934 cri.go:89] found id: "4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608"
I0717 20:30:27.264295 933934 cri.go:89] found id: "3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4"
I0717 20:30:27.264323 933934 cri.go:89] found id: ""
I0717 20:30:27.264369 933934 logs.go:276] 2 containers: [4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608 3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4]
I0717 20:30:27.264494 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.271255 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.276920 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0717 20:30:27.277160 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0717 20:30:27.352163 933934 cri.go:89] found id: "87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08"
I0717 20:30:27.352255 933934 cri.go:89] found id: "fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f"
I0717 20:30:27.352279 933934 cri.go:89] found id: ""
I0717 20:30:27.352338 933934 logs.go:276] 2 containers: [87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08 fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f]
I0717 20:30:27.352453 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.367677 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.371311 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0717 20:30:27.371398 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0717 20:30:27.429211 933934 cri.go:89] found id: "1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874"
I0717 20:30:27.429289 933934 cri.go:89] found id: "62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b"
I0717 20:30:27.429313 933934 cri.go:89] found id: ""
I0717 20:30:27.429367 933934 logs.go:276] 2 containers: [1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874 62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b]
I0717 20:30:27.429472 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.436626 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.440515 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0717 20:30:27.440675 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0717 20:30:27.514271 933934 cri.go:89] found id: "57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0"
I0717 20:30:27.514316 933934 cri.go:89] found id: "89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5"
I0717 20:30:27.514322 933934 cri.go:89] found id: ""
I0717 20:30:27.514330 933934 logs.go:276] 2 containers: [57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0 89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5]
I0717 20:30:27.514397 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.519671 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.525084 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0717 20:30:27.525187 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0717 20:30:27.589900 933934 cri.go:89] found id: "44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1"
I0717 20:30:27.589925 933934 cri.go:89] found id: ""
I0717 20:30:27.589933 933934 logs.go:276] 1 containers: [44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1]
I0717 20:30:27.589997 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.596715 933934 logs.go:123] Gathering logs for describe nodes ...
I0717 20:30:27.596747 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0717 20:30:27.947618 933934 logs.go:123] Gathering logs for kube-scheduler [d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75] ...
I0717 20:30:27.947656 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75"
I0717 20:30:28.092581 933934 logs.go:123] Gathering logs for kube-proxy [3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4] ...
I0717 20:30:28.092616 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4"
I0717 20:30:28.170361 933934 logs.go:123] Gathering logs for storage-provisioner [89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5] ...
I0717 20:30:28.170432 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5"
I0717 20:30:28.276709 933934 logs.go:123] Gathering logs for kube-proxy [4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608] ...
I0717 20:30:28.276780 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608"
I0717 20:30:28.344367 933934 logs.go:123] Gathering logs for kube-controller-manager [fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f] ...
I0717 20:30:28.344447 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f"
I0717 20:30:28.451110 933934 logs.go:123] Gathering logs for kindnet [1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874] ...
I0717 20:30:28.451194 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874"
I0717 20:30:28.542709 933934 logs.go:123] Gathering logs for kindnet [62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b] ...
I0717 20:30:28.542802 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b"
I0717 20:30:28.629609 933934 logs.go:123] Gathering logs for kubelet ...
I0717 20:30:28.629687 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0717 20:30:28.722241 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.042220 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-7976r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-7976r" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.722576 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082566 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.722820 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082721 662 reflector.go:138] object-"kube-system"/"coredns-token-mrwzx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mrwzx" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.723070 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082802 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.723310 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082891 662 reflector.go:138] object-"kube-system"/"kindnet-token-g6tv7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g6tv7" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.727196 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202724 662 reflector.go:138] object-"kube-system"/"metrics-server-token-fksnv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fksnv" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.730359 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202810 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-rtf2k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-rtf2k" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.730625 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202881 662 reflector.go:138] object-"default"/"default-token-9ftpp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-9ftpp" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.738782 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:49 old-k8s-version-069806 kubelet[662]: E0717 20:24:49.183838 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.745235 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:50 old-k8s-version-069806 kubelet[662]: E0717 20:24:50.150410 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.748361 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:01 old-k8s-version-069806 kubelet[662]: E0717 20:25:01.900030 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.750574 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:11 old-k8s-version-069806 kubelet[662]: E0717 20:25:11.253734 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.755725 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:12 old-k8s-version-069806 kubelet[662]: E0717 20:25:12.256602 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.755972 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:12 old-k8s-version-069806 kubelet[662]: E0717 20:25:12.874626 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.756377 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:16 old-k8s-version-069806 kubelet[662]: E0717 20:25:16.266437 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.757278 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:20 old-k8s-version-069806 kubelet[662]: E0717 20:25:20.299236 662 pod_workers.go:191] Error syncing pod b733c4a6-f6de-426d-86c9-67948261d437 ("storage-provisioner_kube-system(b733c4a6-f6de-426d-86c9-67948261d437)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b733c4a6-f6de-426d-86c9-67948261d437)"
W0717 20:30:28.760001 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:27 old-k8s-version-069806 kubelet[662]: E0717 20:25:27.898396 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.761036 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:30 old-k8s-version-069806 kubelet[662]: E0717 20:25:30.313468 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.761536 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:36 old-k8s-version-069806 kubelet[662]: E0717 20:25:36.266512 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.761749 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:40 old-k8s-version-069806 kubelet[662]: E0717 20:25:40.874479 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.762116 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:51 old-k8s-version-069806 kubelet[662]: E0717 20:25:51.875881 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.762616 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:53 old-k8s-version-069806 kubelet[662]: E0717 20:25:53.400984 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.762983 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:56 old-k8s-version-069806 kubelet[662]: E0717 20:25:56.266353 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.763201 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:03 old-k8s-version-069806 kubelet[662]: E0717 20:26:03.874232 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.763561 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:09 old-k8s-version-069806 kubelet[662]: E0717 20:26:09.873806 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.769545 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:17 old-k8s-version-069806 kubelet[662]: E0717 20:26:17.885290 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.769933 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:21 old-k8s-version-069806 kubelet[662]: E0717 20:26:21.873819 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.770148 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:30 old-k8s-version-069806 kubelet[662]: E0717 20:26:30.874108 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.770789 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:37 old-k8s-version-069806 kubelet[662]: E0717 20:26:37.530420 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.771010 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:45 old-k8s-version-069806 kubelet[662]: E0717 20:26:45.874342 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.771386 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:46 old-k8s-version-069806 kubelet[662]: E0717 20:26:46.266436 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.771754 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:56 old-k8s-version-069806 kubelet[662]: E0717 20:26:56.873836 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.771969 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:00 old-k8s-version-069806 kubelet[662]: E0717 20:27:00.874137 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.772350 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:11 old-k8s-version-069806 kubelet[662]: E0717 20:27:11.873895 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.772565 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:13 old-k8s-version-069806 kubelet[662]: E0717 20:27:13.874116 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.772943 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:24 old-k8s-version-069806 kubelet[662]: E0717 20:27:24.873860 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.773157 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:28 old-k8s-version-069806 kubelet[662]: E0717 20:27:28.874119 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.773519 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:37 old-k8s-version-069806 kubelet[662]: E0717 20:27:37.874238 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.776173 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:41 old-k8s-version-069806 kubelet[662]: E0717 20:27:41.890695 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.776548 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:48 old-k8s-version-069806 kubelet[662]: E0717 20:27:48.873801 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.776763 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:56 old-k8s-version-069806 kubelet[662]: E0717 20:27:56.874330 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.777477 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:01 old-k8s-version-069806 kubelet[662]: E0717 20:28:01.743439 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.777845 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:06 old-k8s-version-069806 kubelet[662]: E0717 20:28:06.266883 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.778060 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:07 old-k8s-version-069806 kubelet[662]: E0717 20:28:07.874945 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.778428 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:19 old-k8s-version-069806 kubelet[662]: E0717 20:28:19.877659 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.778647 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:22 old-k8s-version-069806 kubelet[662]: E0717 20:28:22.874306 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.779018 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:33 old-k8s-version-069806 kubelet[662]: E0717 20:28:33.876747 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.779245 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:36 old-k8s-version-069806 kubelet[662]: E0717 20:28:36.874299 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.779617 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:46 old-k8s-version-069806 kubelet[662]: E0717 20:28:46.874249 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.779835 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:48 old-k8s-version-069806 kubelet[662]: E0717 20:28:48.874132 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.780232 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:58 old-k8s-version-069806 kubelet[662]: E0717 20:28:58.873786 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.780448 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:59 old-k8s-version-069806 kubelet[662]: E0717 20:28:59.874380 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.780808 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:11 old-k8s-version-069806 kubelet[662]: E0717 20:29:11.874254 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.781029 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:12 old-k8s-version-069806 kubelet[662]: E0717 20:29:12.874187 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.781398 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:26 old-k8s-version-069806 kubelet[662]: E0717 20:29:26.873840 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.781614 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:27 old-k8s-version-069806 kubelet[662]: E0717 20:29:27.875007 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.781979 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:38 old-k8s-version-069806 kubelet[662]: E0717 20:29:38.874336 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.782198 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:39 old-k8s-version-069806 kubelet[662]: E0717 20:29:39.874118 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.782564 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:49 old-k8s-version-069806 kubelet[662]: E0717 20:29:49.873788 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.782843 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:53 old-k8s-version-069806 kubelet[662]: E0717 20:29:53.876300 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.783215 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:02 old-k8s-version-069806 kubelet[662]: E0717 20:30:02.874275 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.783432 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:07 old-k8s-version-069806 kubelet[662]: E0717 20:30:07.874813 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.783792 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:14 old-k8s-version-069806 kubelet[662]: E0717 20:30:14.873785 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.786430 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:22 old-k8s-version-069806 kubelet[662]: E0717 20:30:22.883566 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.786798 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:25 old-k8s-version-069806 kubelet[662]: E0717 20:30:25.883025 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
I0717 20:30:28.786828 933934 logs.go:123] Gathering logs for coredns [6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4] ...
I0717 20:30:28.786863 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4"
I0717 20:30:28.833770 933934 logs.go:123] Gathering logs for coredns [03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294] ...
I0717 20:30:28.833849 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294"
I0717 20:30:28.895551 933934 logs.go:123] Gathering logs for kube-scheduler [bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef] ...
I0717 20:30:28.895628 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef"
I0717 20:30:28.995745 933934 logs.go:123] Gathering logs for containerd ...
I0717 20:30:28.995823 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0717 20:30:29.100068 933934 logs.go:123] Gathering logs for dmesg ...
I0717 20:30:29.100146 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0717 20:30:29.125900 933934 logs.go:123] Gathering logs for kube-apiserver [cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c] ...
I0717 20:30:29.126061 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c"
I0717 20:30:29.268391 933934 logs.go:123] Gathering logs for etcd [e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1] ...
I0717 20:30:29.268472 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1"
I0717 20:30:29.345202 933934 logs.go:123] Gathering logs for kube-controller-manager [87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08] ...
I0717 20:30:29.345282 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08"
I0717 20:30:29.445718 933934 logs.go:123] Gathering logs for container status ...
I0717 20:30:29.445797 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0717 20:30:29.515085 933934 logs.go:123] Gathering logs for kube-apiserver [f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc] ...
I0717 20:30:29.515163 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc"
I0717 20:30:29.590224 933934 logs.go:123] Gathering logs for etcd [687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c] ...
I0717 20:30:29.590313 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c"
I0717 20:30:29.682136 933934 logs.go:123] Gathering logs for storage-provisioner [57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0] ...
I0717 20:30:29.682207 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0"
I0717 20:30:29.861255 933934 logs.go:123] Gathering logs for kubernetes-dashboard [44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1] ...
I0717 20:30:29.861335 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1"
I0717 20:30:30.009936 933934 out.go:304] Setting ErrFile to fd 2...
I0717 20:30:30.010027 933934 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0717 20:30:30.010116 933934 out.go:239] X Problems detected in kubelet:
W0717 20:30:30.010456 933934 out.go:239] Jul 17 20:30:02 old-k8s-version-069806 kubelet[662]: E0717 20:30:02.874275 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:30.010578 933934 out.go:239] Jul 17 20:30:07 old-k8s-version-069806 kubelet[662]: E0717 20:30:07.874813 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:30.010645 933934 out.go:239] Jul 17 20:30:14 old-k8s-version-069806 kubelet[662]: E0717 20:30:14.873785 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:30.010680 933934 out.go:239] Jul 17 20:30:22 old-k8s-version-069806 kubelet[662]: E0717 20:30:22.883566 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:30.010757 933934 out.go:239] Jul 17 20:30:25 old-k8s-version-069806 kubelet[662]: E0717 20:30:25.883025 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
I0717 20:30:30.010795 933934 out.go:304] Setting ErrFile to fd 2...
I0717 20:30:30.010911 933934 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:30:40.013310 933934 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0717 20:30:40.037373 933934 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0717 20:30:40.040368 933934 out.go:177]
W0717 20:30:40.043794 933934 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0717 20:30:40.043842 933934 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0717 20:30:40.043863 933934 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0717 20:30:40.043869 933934 out.go:239] *
W0717 20:30:40.044970 933934 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 20:30:40.055997 933934 out.go:177]
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-069806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
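The fake.domain pull failures that dominate the stderr above are expected for this test: the audit trail in the post-mortem below shows metrics-server was enabled with --registries=MetricsServer=fake.domain, an intentionally unresolvable registry. A minimal sketch (assuming a shell on the CI host; plain nslookup and minikube invocations, nothing test-specific) of confirming the DNS failure and applying the cleanup the log itself suggests:

    # Reproduce the kubelet's resolver failure against the cluster DNS (192.168.76.1,
    # taken from the ErrImagePull messages above); expect "no such host"/NXDOMAIN.
    nslookup fake.domain 192.168.76.1

    # The remediation printed next to K8S_UNHEALTHY_CONTROL_PLANE above:
    # wipe all profiles and cached state before retrying.
    out/minikube-linux-arm64 delete --all --purge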
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-069806
helpers_test.go:235: (dbg) docker inspect old-k8s-version-069806:
-- stdout --
[
{
"Id": "97b861eb6dc5d52b0fded597c9871fb26944e5dabef537738c7addf687d7c709",
"Created": "2024-07-17T20:21:10.876011032Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 934149,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-07-17T20:24:22.947859305Z",
"FinishedAt": "2024-07-17T20:24:21.796731447Z"
},
"Image": "sha256:a11bb5e2546f3ca503b3c47550c3b044149515e29579deea734475d97cc9a2be",
"ResolvConfPath": "/var/lib/docker/containers/97b861eb6dc5d52b0fded597c9871fb26944e5dabef537738c7addf687d7c709/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/97b861eb6dc5d52b0fded597c9871fb26944e5dabef537738c7addf687d7c709/hostname",
"HostsPath": "/var/lib/docker/containers/97b861eb6dc5d52b0fded597c9871fb26944e5dabef537738c7addf687d7c709/hosts",
"LogPath": "/var/lib/docker/containers/97b861eb6dc5d52b0fded597c9871fb26944e5dabef537738c7addf687d7c709/97b861eb6dc5d52b0fded597c9871fb26944e5dabef537738c7addf687d7c709-json.log",
"Name": "/old-k8s-version-069806",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-069806:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-069806",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/d27b6d411e1237693fc08a11515df81b6e3cbe75df8633a394679e973b33f9be-init/diff:/var/lib/docker/overlay2/330a017725679c1db133a0ed13915b02cfa1878fbe6c4ab5849ef9b0af25d1ee/diff",
"MergedDir": "/var/lib/docker/overlay2/d27b6d411e1237693fc08a11515df81b6e3cbe75df8633a394679e973b33f9be/merged",
"UpperDir": "/var/lib/docker/overlay2/d27b6d411e1237693fc08a11515df81b6e3cbe75df8633a394679e973b33f9be/diff",
"WorkDir": "/var/lib/docker/overlay2/d27b6d411e1237693fc08a11515df81b6e3cbe75df8633a394679e973b33f9be/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-069806",
"Source": "/var/lib/docker/volumes/old-k8s-version-069806/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-069806",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-069806",
"name.minikube.sigs.k8s.io": "old-k8s-version-069806",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "32d379803ca89248da8d7d7ee2997c567f86eeddc7476301fdfe1cf7e167795f",
"SandboxKey": "/var/run/docker/netns/32d379803ca8",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33824"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33825"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33828"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33826"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33827"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-069806": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "6269ce15976b3fb39eecf0ecfe5482225a9efe38ae5bc9d37d8bff293a8a3ff8",
"EndpointID": "7db2b2710e65d4bff245b00be494c93c32422fc2508c6e42b06d493821f25cec",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-069806",
"97b861eb6dc5"
]
}
}
}
}
]
-- /stdout --
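The inspect dump above is easier to consume with docker's Go-template --format flag; a small sketch (field paths follow the JSON above; the profile name is unchanged) extracting just the container state and the host port mapped to the apiserver's 8443/tcp:

    # State summary; per the dump above this prints: status=running restarts=0 pid=934149
    docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}} pid={{.State.Pid}}' old-k8s-version-069806

    # Host port bound to the apiserver; the dump above maps 8443/tcp to 33827.
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-069806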
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-069806 -n old-k8s-version-069806
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-069806 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-069806 logs -n 25: (2.298861156s)
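For reference before the full dump: the audit table below records the command that seeded the fake.domain failures in this run. Copied from that table, the override is equivalent to running:

    # Deliberately points the metrics-server image at an unresolvable registry,
    # so pulls fail with "no such host" exactly as the kubelet problems report.
    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-069806 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain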
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-125673 | cert-expiration-125673 | jenkins | v1.33.1 | 17 Jul 24 20:19 UTC | 17 Jul 24 20:20 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-336096 | force-systemd-env-336096 | jenkins | v1.33.1 | 17 Jul 24 20:20 UTC | 17 Jul 24 20:20 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-336096 | force-systemd-env-336096 | jenkins | v1.33.1 | 17 Jul 24 20:20 UTC | 17 Jul 24 20:20 UTC |
| start | -p cert-options-693725 | cert-options-693725 | jenkins | v1.33.1 | 17 Jul 24 20:20 UTC | 17 Jul 24 20:21 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-693725 ssh | cert-options-693725 | jenkins | v1.33.1 | 17 Jul 24 20:21 UTC | 17 Jul 24 20:21 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-693725 -- sudo | cert-options-693725 | jenkins | v1.33.1 | 17 Jul 24 20:21 UTC | 17 Jul 24 20:21 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-693725 | cert-options-693725 | jenkins | v1.33.1 | 17 Jul 24 20:21 UTC | 17 Jul 24 20:21 UTC |
| start | -p old-k8s-version-069806 | old-k8s-version-069806 | jenkins | v1.33.1 | 17 Jul 24 20:21 UTC | 17 Jul 24 20:23 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-125673 | cert-expiration-125673 | jenkins | v1.33.1 | 17 Jul 24 20:23 UTC | 17 Jul 24 20:23 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-125673 | cert-expiration-125673 | jenkins | v1.33.1 | 17 Jul 24 20:23 UTC | 17 Jul 24 20:23 UTC |
| start | -p no-preload-385299 --memory=2200 | no-preload-385299 | jenkins | v1.33.1 | 17 Jul 24 20:23 UTC | 17 Jul 24 20:24 UTC |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.0-beta.0 | | | | | |
| addons | enable metrics-server -p old-k8s-version-069806 | old-k8s-version-069806 | jenkins | v1.33.1 | 17 Jul 24 20:24 UTC | 17 Jul 24 20:24 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-069806 | old-k8s-version-069806 | jenkins | v1.33.1 | 17 Jul 24 20:24 UTC | 17 Jul 24 20:24 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-069806 | old-k8s-version-069806 | jenkins | v1.33.1 | 17 Jul 24 20:24 UTC | 17 Jul 24 20:24 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-069806 | old-k8s-version-069806 | jenkins | v1.33.1 | 17 Jul 24 20:24 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-385299 | no-preload-385299 | jenkins | v1.33.1 | 17 Jul 24 20:25 UTC | 17 Jul 24 20:25 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-385299 | no-preload-385299 | jenkins | v1.33.1 | 17 Jul 24 20:25 UTC | 17 Jul 24 20:25 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-385299 | no-preload-385299 | jenkins | v1.33.1 | 17 Jul 24 20:25 UTC | 17 Jul 24 20:25 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-385299 --memory=2200 | no-preload-385299 | jenkins | v1.33.1 | 17 Jul 24 20:25 UTC | 17 Jul 24 20:29 UTC |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.0-beta.0 | | | | | |
| image | no-preload-385299 image list | no-preload-385299 | jenkins | v1.33.1 | 17 Jul 24 20:29 UTC | 17 Jul 24 20:29 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-385299 | no-preload-385299 | jenkins | v1.33.1 | 17 Jul 24 20:29 UTC | 17 Jul 24 20:29 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-385299 | no-preload-385299 | jenkins | v1.33.1 | 17 Jul 24 20:29 UTC | 17 Jul 24 20:29 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-385299 | no-preload-385299 | jenkins | v1.33.1 | 17 Jul 24 20:29 UTC | 17 Jul 24 20:30 UTC |
| delete | -p no-preload-385299 | no-preload-385299 | jenkins | v1.33.1 | 17 Jul 24 20:30 UTC | 17 Jul 24 20:30 UTC |
| start | -p embed-certs-195036 | embed-certs-195036 | jenkins | v1.33.1 | 17 Jul 24 20:30 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.30.2 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/17 20:30:02
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.22.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0717 20:30:02.169976 944109 out.go:291] Setting OutFile to fd 1 ...
I0717 20:30:02.170170 944109 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:30:02.170199 944109 out.go:304] Setting ErrFile to fd 2...
I0717 20:30:02.170220 944109 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:30:02.170486 944109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-720845/.minikube/bin
I0717 20:30:02.170962 944109 out.go:298] Setting JSON to false
I0717 20:30:02.172085 944109 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15150,"bootTime":1721233052,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0717 20:30:02.172188 944109 start.go:139] virtualization:
I0717 20:30:02.174743 944109 out.go:177] * [embed-certs-195036] minikube v1.33.1 on Ubuntu 20.04 (arm64)
I0717 20:30:02.176867 944109 out.go:177] - MINIKUBE_LOCATION=19282
I0717 20:30:02.176995 944109 notify.go:220] Checking for updates...
I0717 20:30:02.180897 944109 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 20:30:02.182943 944109 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19282-720845/kubeconfig
I0717 20:30:02.184655 944109 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-720845/.minikube
I0717 20:30:02.186685 944109 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0717 20:30:02.188718 944109 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0717 20:30:02.191172 944109 config.go:182] Loaded profile config "old-k8s-version-069806": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0717 20:30:02.191315 944109 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 20:30:02.228271 944109 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
I0717 20:30:02.228477 944109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0717 20:30:02.304980 944109 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-17 20:30:02.281392764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
I0717 20:30:02.305093 944109 docker.go:307] overlay module found
I0717 20:30:02.307431 944109 out.go:177] * Using the docker driver based on user configuration
I0717 20:29:59.232394 933934 pod_ready.go:102] pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace has status "Ready":"False"
I0717 20:30:01.721527 933934 pod_ready.go:102] pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace has status "Ready":"False"
I0717 20:30:02.309388 944109 start.go:297] selected driver: docker
I0717 20:30:02.309417 944109 start.go:901] validating driver "docker" against <nil>
I0717 20:30:02.309434 944109 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 20:30:02.310174 944109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0717 20:30:02.372228 944109 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-17 20:30:02.36278293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
I0717 20:30:02.372393 944109 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0717 20:30:02.372622 944109 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 20:30:02.374731 944109 out.go:177] * Using Docker driver with root privileges
I0717 20:30:02.376559 944109 cni.go:84] Creating CNI manager for ""
I0717 20:30:02.376582 944109 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0717 20:30:02.376591 944109 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0717 20:30:02.376685 944109 start.go:340] cluster config:
{Name:embed-certs-195036 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-195036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 20:30:02.378795 944109 out.go:177] * Starting "embed-certs-195036" primary control-plane node in "embed-certs-195036" cluster
I0717 20:30:02.380895 944109 cache.go:121] Beginning downloading kic base image for docker with containerd
I0717 20:30:02.382800 944109 out.go:177] * Pulling base image v0.0.44-1721234491-19282 ...
I0717 20:30:02.384883 944109 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 20:30:02.384942 944109 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-720845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4
I0717 20:30:02.384956 944109 cache.go:56] Caching tarball of preloaded images
I0717 20:30:02.384954 944109 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 in local docker daemon
I0717 20:30:02.385038 944109 preload.go:172] Found /home/jenkins/minikube-integration/19282-720845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0717 20:30:02.385049 944109 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on containerd
I0717 20:30:02.385157 944109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/config.json ...
I0717 20:30:02.385185 944109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/config.json: {Name:mkf90a5fa2b55bace698c558a372a10c21691737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
W0717 20:30:02.403735 944109 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 is of wrong architecture
I0717 20:30:02.403760 944109 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 to local cache
I0717 20:30:02.403859 944109 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 in local cache directory
I0717 20:30:02.403883 944109 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 in local cache directory, skipping pull
I0717 20:30:02.403888 944109 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 exists in cache, skipping pull
I0717 20:30:02.403899 944109 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 as a tarball
I0717 20:30:02.403905 944109 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 from local cache
I0717 20:30:02.519343 944109 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 from cached tarball
I0717 20:30:02.519379 944109 cache.go:194] Successfully downloaded all kic artifacts
I0717 20:30:02.519419 944109 start.go:360] acquireMachinesLock for embed-certs-195036: {Name:mkdf09256bae7a80bad05ba6bb7a7384a3eb96c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 20:30:02.520092 944109 start.go:364] duration metric: took 650.791µs to acquireMachinesLock for "embed-certs-195036"
I0717 20:30:02.520138 944109 start.go:93] Provisioning new machine with config: &{Name:embed-certs-195036 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-195036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0717 20:30:02.520231 944109 start.go:125] createHost starting for "" (driver="docker")
I0717 20:30:02.524639 944109 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0717 20:30:02.524961 944109 start.go:159] libmachine.API.Create for "embed-certs-195036" (driver="docker")
I0717 20:30:02.525008 944109 client.go:168] LocalClient.Create starting
I0717 20:30:02.525100 944109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca.pem
I0717 20:30:02.525142 944109 main.go:141] libmachine: Decoding PEM data...
I0717 20:30:02.525165 944109 main.go:141] libmachine: Parsing certificate...
I0717 20:30:02.525236 944109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-720845/.minikube/certs/cert.pem
I0717 20:30:02.525278 944109 main.go:141] libmachine: Decoding PEM data...
I0717 20:30:02.525294 944109 main.go:141] libmachine: Parsing certificate...
I0717 20:30:02.525725 944109 cli_runner.go:164] Run: docker network inspect embed-certs-195036 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0717 20:30:02.542943 944109 cli_runner.go:211] docker network inspect embed-certs-195036 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0717 20:30:02.543052 944109 network_create.go:284] running [docker network inspect embed-certs-195036] to gather additional debugging logs...
I0717 20:30:02.543084 944109 cli_runner.go:164] Run: docker network inspect embed-certs-195036
W0717 20:30:02.558188 944109 cli_runner.go:211] docker network inspect embed-certs-195036 returned with exit code 1
I0717 20:30:02.558223 944109 network_create.go:287] error running [docker network inspect embed-certs-195036]: docker network inspect embed-certs-195036: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-195036 not found
I0717 20:30:02.558237 944109 network_create.go:289] output of [docker network inspect embed-certs-195036]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-195036 not found
** /stderr **
I0717 20:30:02.558335 944109 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0717 20:30:02.575032 944109 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-33e3147793ea IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d4:90:4a:d8} reservation:<nil>}
I0717 20:30:02.575600 944109 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-798078d9c1d4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:fc:ef:a4:0a} reservation:<nil>}
I0717 20:30:02.576349 944109 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8f81f57b47a2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:70:53:0f:59} reservation:<nil>}
I0717 20:30:02.576742 944109 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6269ce15976b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:34:64:3b:70} reservation:<nil>}
I0717 20:30:02.577303 944109 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018b2000}
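
The scan above walks candidate private /24 ranges (192.168.49.0, .58.0, .67.0, .76.0, stepping the third octet by 9) and takes the first whose gateway is not already bound to a host interface. A minimal Go sketch of that selection, with the step size and the interface check inferred from this log rather than taken from minikube's actual network.go:

package main

import (
	"fmt"
	"net"
)

// gatewayInUse reports whether gw is already assigned to a local
// interface (e.g. an existing docker bridge like br-33e3147793ea).
func gatewayInUse(gw net.IP) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.Equal(gw) {
			return true
		}
	}
	return false
}

func main() {
	for octet := 49; octet <= 255; octet += 9 {
		gw := net.IPv4(192, 168, byte(octet), 1)
		if gatewayInUse(gw) {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
		break
	}
}
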
I0717 20:30:02.577352 944109 network_create.go:124] attempt to create docker network embed-certs-195036 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0717 20:30:02.577436 944109 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-195036 embed-certs-195036
I0717 20:30:02.652546 944109 network_create.go:108] docker network embed-certs-195036 192.168.85.0/24 created
I0717 20:30:02.652580 944109 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-195036" container
I0717 20:30:02.652655 944109 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0717 20:30:02.668571 944109 cli_runner.go:164] Run: docker volume create embed-certs-195036 --label name.minikube.sigs.k8s.io=embed-certs-195036 --label created_by.minikube.sigs.k8s.io=true
I0717 20:30:02.686145 944109 oci.go:103] Successfully created a docker volume embed-certs-195036
I0717 20:30:02.686237 944109 cli_runner.go:164] Run: docker run --rm --name embed-certs-195036-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-195036 --entrypoint /usr/bin/test -v embed-certs-195036:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 -d /var/lib
I0717 20:30:03.335708 944109 oci.go:107] Successfully prepared a docker volume embed-certs-195036
I0717 20:30:03.335757 944109 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 20:30:03.335778 944109 kic.go:194] Starting extracting preloaded images to volume ...
I0717 20:30:03.335860 944109 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19282-720845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-195036:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 -I lz4 -xf /preloaded.tar -C /extractDir
I0717 20:30:04.220526 933934 pod_ready.go:102] pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace has status "Ready":"False"
I0717 20:30:06.220767 933934 pod_ready.go:102] pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace has status "Ready":"False"
I0717 20:30:09.277556 944109 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19282-720845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-195036:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 -I lz4 -xf /preloaded.tar -C /extractDir: (5.94165205s)
I0717 20:30:09.277587 944109 kic.go:203] duration metric: took 5.941805868s to extract preloaded images to volume ...
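
The step just timed at 5.94s mounts the lz4 preload tarball read-only and the machine's Docker volume into a throwaway kicbase container, then untars into the volume. A hedged os/exec sketch of that invocation, with the paths and image tag copied from the log above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/19282-720845/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4"
	volume := "embed-certs-195036"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282"
	// docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf /preloaded.tar -C /extractDir
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}
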
W0717 20:30:09.277716 944109 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0717 20:30:09.277818 944109 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0717 20:30:09.328684 944109 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-195036 --name embed-certs-195036 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-195036 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-195036 --network embed-certs-195036 --ip 192.168.85.2 --volume embed-certs-195036:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2
I0717 20:30:09.711866 944109 cli_runner.go:164] Run: docker container inspect embed-certs-195036 --format={{.State.Running}}
I0717 20:30:09.737804 944109 cli_runner.go:164] Run: docker container inspect embed-certs-195036 --format={{.State.Status}}
I0717 20:30:09.756847 944109 cli_runner.go:164] Run: docker exec embed-certs-195036 stat /var/lib/dpkg/alternatives/iptables
I0717 20:30:09.833532 944109 oci.go:144] the created container "embed-certs-195036" has a running status.
I0717 20:30:09.833560 944109 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19282-720845/.minikube/machines/embed-certs-195036/id_rsa...
I0717 20:30:10.834162 944109 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19282-720845/.minikube/machines/embed-certs-195036/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0717 20:30:10.857769 944109 cli_runner.go:164] Run: docker container inspect embed-certs-195036 --format={{.State.Status}}
I0717 20:30:10.878824 944109 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0717 20:30:10.878842 944109 kic_runner.go:114] Args: [docker exec --privileged embed-certs-195036 chown docker:docker /home/docker/.ssh/authorized_keys]
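
The kic ssh-key step generates id_rsa/id_rsa.pub under the machine directory, copies the public half to /home/docker/.ssh/authorized_keys (the 381 bytes logged matches an RSA-2048 authorized_keys line), then chowns it inside the container. A rough sketch of the key-generation half using the standard library plus golang.org/x/crypto/ssh, assuming RSA-2048; the exact key type used by kic.go is not visible in this log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the keypair (RSA-2048 assumed).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// PEM-encode the private key to id_rsa.
	priv := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", priv, 0600); err != nil {
		panic(err)
	}
	// Write the authorized_keys-format public key to id_rsa.pub.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}
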
I0717 20:30:10.934276 944109 cli_runner.go:164] Run: docker container inspect embed-certs-195036 --format={{.State.Status}}
I0717 20:30:10.953202 944109 machine.go:94] provisionDockerMachine start ...
I0717 20:30:10.953322 944109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-195036
I0717 20:30:10.976214 944109 main.go:141] libmachine: Using SSH client type: native
I0717 20:30:10.976494 944109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33834 <nil> <nil>}
I0717 20:30:10.976504 944109 main.go:141] libmachine: About to run SSH command:
hostname
I0717 20:30:11.108928 944109 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-195036
I0717 20:30:11.109007 944109 ubuntu.go:169] provisioning hostname "embed-certs-195036"
I0717 20:30:11.109131 944109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-195036
I0717 20:30:11.137873 944109 main.go:141] libmachine: Using SSH client type: native
I0717 20:30:11.138107 944109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33834 <nil> <nil>}
I0717 20:30:11.138134 944109 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-195036 && echo "embed-certs-195036" | sudo tee /etc/hostname
I0717 20:30:11.298277 944109 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-195036
I0717 20:30:11.298365 944109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-195036
I0717 20:30:11.322849 944109 main.go:141] libmachine: Using SSH client type: native
I0717 20:30:11.323105 944109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33834 <nil> <nil>}
I0717 20:30:11.323124 944109 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-195036' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-195036/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-195036' | sudo tee -a /etc/hosts;
fi
fi
I0717 20:30:11.452188 944109 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 20:30:11.452217 944109 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19282-720845/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-720845/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-720845/.minikube}
I0717 20:30:11.452243 944109 ubuntu.go:177] setting up certificates
I0717 20:30:11.452254 944109 provision.go:84] configureAuth start
I0717 20:30:11.452311 944109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-195036
I0717 20:30:11.468746 944109 provision.go:143] copyHostCerts
I0717 20:30:11.468818 944109 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-720845/.minikube/cert.pem, removing ...
I0717 20:30:11.468832 944109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-720845/.minikube/cert.pem
I0717 20:30:11.468909 944109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-720845/.minikube/cert.pem (1123 bytes)
I0717 20:30:11.469019 944109 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-720845/.minikube/key.pem, removing ...
I0717 20:30:11.469030 944109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-720845/.minikube/key.pem
I0717 20:30:11.469063 944109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-720845/.minikube/key.pem (1679 bytes)
I0717 20:30:11.469130 944109 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-720845/.minikube/ca.pem, removing ...
I0717 20:30:11.469146 944109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-720845/.minikube/ca.pem
I0717 20:30:11.469174 944109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-720845/.minikube/ca.pem (1078 bytes)
I0717 20:30:11.469230 944109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-720845/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca-key.pem org=jenkins.embed-certs-195036 san=[127.0.0.1 192.168.85.2 embed-certs-195036 localhost minikube]
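
provision.go:117 issues a server certificate whose SANs are exactly the logged list (127.0.0.1 192.168.85.2 embed-certs-195036 localhost minikube), signed with the shared minikube CA key. A crypto/x509 sketch of such a certificate; it self-signs for brevity where the real code signs with ca-key.pem, and the 26280h lifetime is borrowed from CertExpiration in the config dump:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-195036"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"embed-certs-195036", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed stand-in for the CA-signed server.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
}
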
I0717 20:30:08.719491 933934 pod_ready.go:102] pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace has status "Ready":"False"
I0717 20:30:10.735195 933934 pod_ready.go:102] pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace has status "Ready":"False"
I0717 20:30:12.436568 944109 provision.go:177] copyRemoteCerts
I0717 20:30:12.436644 944109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 20:30:12.436699 944109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-195036
I0717 20:30:12.458693 944109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/embed-certs-195036/id_rsa Username:docker}
I0717 20:30:12.553313 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0717 20:30:12.580369 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0717 20:30:12.607623 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0717 20:30:12.632690 944109 provision.go:87] duration metric: took 1.180421949s to configureAuth
I0717 20:30:12.632734 944109 ubuntu.go:193] setting minikube options for container-runtime
I0717 20:30:12.632920 944109 config.go:182] Loaded profile config "embed-certs-195036": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 20:30:12.632935 944109 machine.go:97] duration metric: took 1.679683569s to provisionDockerMachine
I0717 20:30:12.632943 944109 client.go:171] duration metric: took 10.107925526s to LocalClient.Create
I0717 20:30:12.632964 944109 start.go:167] duration metric: took 10.108004827s to libmachine.API.Create "embed-certs-195036"
I0717 20:30:12.632976 944109 start.go:293] postStartSetup for "embed-certs-195036" (driver="docker")
I0717 20:30:12.632985 944109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 20:30:12.633067 944109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 20:30:12.633114 944109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-195036
I0717 20:30:12.651447 944109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/embed-certs-195036/id_rsa Username:docker}
I0717 20:30:12.741717 944109 ssh_runner.go:195] Run: cat /etc/os-release
I0717 20:30:12.745114 944109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0717 20:30:12.745154 944109 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0717 20:30:12.745167 944109 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0717 20:30:12.745175 944109 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0717 20:30:12.745185 944109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-720845/.minikube/addons for local assets ...
I0717 20:30:12.745247 944109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-720845/.minikube/files for local assets ...
I0717 20:30:12.745330 944109 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-720845/.minikube/files/etc/ssl/certs/7262252.pem -> 7262252.pem in /etc/ssl/certs
I0717 20:30:12.745439 944109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 20:30:12.754826 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/files/etc/ssl/certs/7262252.pem --> /etc/ssl/certs/7262252.pem (1708 bytes)
I0717 20:30:12.781309 944109 start.go:296] duration metric: took 148.317426ms for postStartSetup
I0717 20:30:12.781687 944109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-195036
I0717 20:30:12.798563 944109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/config.json ...
I0717 20:30:12.798853 944109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0717 20:30:12.798905 944109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-195036
I0717 20:30:12.816133 944109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/embed-certs-195036/id_rsa Username:docker}
I0717 20:30:12.905233 944109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0717 20:30:12.909883 944109 start.go:128] duration metric: took 10.389635186s to createHost
I0717 20:30:12.909923 944109 start.go:83] releasing machines lock for "embed-certs-195036", held for 10.38979739s
I0717 20:30:12.910011 944109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-195036
I0717 20:30:12.926899 944109 ssh_runner.go:195] Run: cat /version.json
I0717 20:30:12.926968 944109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-195036
I0717 20:30:12.926909 944109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 20:30:12.927133 944109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-195036
I0717 20:30:12.954882 944109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/embed-certs-195036/id_rsa Username:docker}
I0717 20:30:12.964192 944109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19282-720845/.minikube/machines/embed-certs-195036/id_rsa Username:docker}
I0717 20:30:13.180380 944109 ssh_runner.go:195] Run: systemctl --version
I0717 20:30:13.185251 944109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0717 20:30:13.189748 944109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0717 20:30:13.215874 944109 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0717 20:30:13.215950 944109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 20:30:13.247822 944109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0717 20:30:13.247850 944109 start.go:495] detecting cgroup driver to use...
I0717 20:30:13.247884 944109 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0717 20:30:13.247934 944109 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 20:30:13.261126 944109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 20:30:13.273383 944109 docker.go:217] disabling cri-docker service (if available) ...
I0717 20:30:13.273462 944109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0717 20:30:13.288516 944109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0717 20:30:13.304151 944109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0717 20:30:13.407592 944109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0717 20:30:13.508232 944109 docker.go:233] disabling docker service ...
I0717 20:30:13.508305 944109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0717 20:30:13.532015 944109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0717 20:30:13.545721 944109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0717 20:30:13.640821 944109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0717 20:30:13.738355 944109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0717 20:30:13.752670 944109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 20:30:13.771808 944109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0717 20:30:13.782770 944109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 20:30:13.794278 944109 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 20:30:13.794373 944109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 20:30:13.805625 944109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 20:30:13.816834 944109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 20:30:13.828908 944109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 20:30:13.840221 944109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 20:30:13.852126 944109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 20:30:13.863254 944109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0717 20:30:13.873406 944109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
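
Each sed -r call above is an idempotent, in-place rewrite of /etc/containerd/config.toml. As one illustration, the SystemdCgroup edit (forced to false so containerd matches the detected "cgroupfs" driver) could equally be done with the same multiline regex in Go:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Println("write failed:", err)
	}
}
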
I0717 20:30:13.885627 944109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 20:30:13.894997 944109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 20:30:13.904801 944109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 20:30:14.007855 944109 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0717 20:30:14.174806 944109 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0717 20:30:14.174877 944109 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 20:30:14.178919 944109 start.go:563] Will wait 60s for crictl version
I0717 20:30:14.179033 944109 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.182871 944109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0717 20:30:14.244419 944109 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.19
RuntimeApiVersion: v1
I0717 20:30:14.244598 944109 ssh_runner.go:195] Run: containerd --version
I0717 20:30:14.276114 944109 ssh_runner.go:195] Run: containerd --version
I0717 20:30:14.324603 944109 out.go:177] * Preparing Kubernetes v1.30.2 on containerd 1.7.19 ...
I0717 20:30:14.326822 944109 cli_runner.go:164] Run: docker network inspect embed-certs-195036 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0717 20:30:14.348252 944109 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0717 20:30:14.352451 944109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 20:30:14.369851 944109 kubeadm.go:883] updating cluster {Name:embed-certs-195036 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-195036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0717 20:30:14.369973 944109 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 20:30:14.370047 944109 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 20:30:14.414464 944109 containerd.go:627] all images are preloaded for containerd runtime.
I0717 20:30:14.414486 944109 containerd.go:534] Images already preloaded, skipping extraction
I0717 20:30:14.414546 944109 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 20:30:14.466960 944109 containerd.go:627] all images are preloaded for containerd runtime.
I0717 20:30:14.467033 944109 cache_images.go:84] Images are preloaded, skipping loading
I0717 20:30:14.467107 944109 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.30.2 containerd true true} ...
I0717 20:30:14.467243 944109 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-195036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:embed-certs-195036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0717 20:30:14.467360 944109 ssh_runner.go:195] Run: sudo crictl info
I0717 20:30:14.520577 944109 cni.go:84] Creating CNI manager for ""
I0717 20:30:14.520601 944109 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0717 20:30:14.520612 944109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0717 20:30:14.520635 944109 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-195036 NodeName:embed-certs-195036 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0717 20:30:14.520796 944109 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-195036"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
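
The kubeadm.go:187 config printed above is rendered from the options struct logged at kubeadm.go:181 and lands on disk as /var/tmp/minikube/kubeadm.yaml.new (2172 bytes, scp'd below). A toy text/template sketch of that rendering, cut down to a few ClusterConfiguration fields; minikube's real template covers all three embedded documents:

package main

import (
	"os"
	"text/template"
)

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type opts struct {
	ControlPlaneAddress string
	APIServerPort       int
	KubernetesVersion   string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	// Values taken from the kubeadm options logged above.
	t.Execute(os.Stdout, opts{
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.30.2",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	})
}
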
I0717 20:30:14.520864 944109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0717 20:30:14.535470 944109 binaries.go:44] Found k8s binaries, skipping transfer
I0717 20:30:14.535593 944109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0717 20:30:14.546236 944109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0717 20:30:14.567648 944109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0717 20:30:14.587806 944109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
I0717 20:30:14.610963 944109 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0717 20:30:14.615312 944109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 20:30:14.629782 944109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 20:30:14.746845 944109 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0717 20:30:14.765440 944109 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036 for IP: 192.168.85.2
I0717 20:30:14.765463 944109 certs.go:194] generating shared ca certs ...
I0717 20:30:14.765479 944109 certs.go:226] acquiring lock for ca certs: {Name:mk70fd46ee08fce14a9e7548fea7cc8fad7ae6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:30:14.765622 944109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-720845/.minikube/ca.key
I0717 20:30:14.765691 944109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-720845/.minikube/proxy-client-ca.key
I0717 20:30:14.765711 944109 certs.go:256] generating profile certs ...
I0717 20:30:14.765778 944109 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/client.key
I0717 20:30:14.765796 944109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/client.crt with IP's: []
I0717 20:30:15.177450 944109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/client.crt ...
I0717 20:30:15.177483 944109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/client.crt: {Name:mk36526bc07c23bff0b4ba0f46ec81e730c96fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:30:15.177722 944109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/client.key ...
I0717 20:30:15.177739 944109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/client.key: {Name:mkb03bc1dbaae543439b49d7d61a5c29e6fae779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:30:15.178441 944109 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.key.59c026f1
I0717 20:30:15.178472 944109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.crt.59c026f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0717 20:30:16.447894 944109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.crt.59c026f1 ...
I0717 20:30:16.447966 944109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.crt.59c026f1: {Name:mkba142375c3669b5e44cf5a4a3e7c0f294f706c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:30:16.448239 944109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.key.59c026f1 ...
I0717 20:30:16.448282 944109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.key.59c026f1: {Name:mk1ad5161842aedf8b105010d74f2bc4f5d5a044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:30:16.451581 944109 certs.go:381] copying /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.crt.59c026f1 -> /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.crt
I0717 20:30:16.451769 944109 certs.go:385] copying /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.key.59c026f1 -> /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.key
I0717 20:30:16.451928 944109 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/proxy-client.key
I0717 20:30:16.451970 944109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/proxy-client.crt with IP's: []
I0717 20:30:16.863000 944109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/proxy-client.crt ...
I0717 20:30:16.863031 944109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/proxy-client.crt: {Name:mkb14fb6500ee2b989200068c1b37beb96a4e274 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:30:16.863245 944109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/proxy-client.key ...
I0717 20:30:16.863261 944109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/proxy-client.key: {Name:mked403344538e10d16a33c16c7107f6cf25ff72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:30:16.863457 944109 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/726225.pem (1338 bytes)
W0717 20:30:16.863501 944109 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-720845/.minikube/certs/726225_empty.pem, impossibly tiny 0 bytes
I0717 20:30:16.863515 944109 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca-key.pem (1675 bytes)
I0717 20:30:16.863541 944109 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/ca.pem (1078 bytes)
I0717 20:30:16.863576 944109 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/cert.pem (1123 bytes)
I0717 20:30:16.863604 944109 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/certs/key.pem (1679 bytes)
I0717 20:30:16.863656 944109 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-720845/.minikube/files/etc/ssl/certs/7262252.pem (1708 bytes)
I0717 20:30:16.864322 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0717 20:30:16.891732 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0717 20:30:16.917650 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0717 20:30:16.943208 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0717 20:30:16.968837 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0717 20:30:16.994005 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0717 20:30:17.022840 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0717 20:30:17.049594 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/profiles/embed-certs-195036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0717 20:30:17.075769 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0717 20:30:17.107793 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/certs/726225.pem --> /usr/share/ca-certificates/726225.pem (1338 bytes)
I0717 20:30:17.136324 944109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-720845/.minikube/files/etc/ssl/certs/7262252.pem --> /usr/share/ca-certificates/7262252.pem (1708 bytes)
I0717 20:30:17.166965 944109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0717 20:30:13.221116 933934 pod_ready.go:102] pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace has status "Ready":"False"
I0717 20:30:14.213455 933934 pod_ready.go:81] duration metric: took 4m0.000039803s for pod "metrics-server-9975d5f86-v4sfl" in "kube-system" namespace to be "Ready" ...
E0717 20:30:14.213486 933934 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0717 20:30:14.213497 933934 pod_ready.go:38] duration metric: took 5m27.099755562s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
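
The pod_ready wait that just expired polls the pod's Ready condition until it turns True or the 4m extra-wait deadline passes; here metrics-server never became Ready because its image pull from fake.domain can never succeed. A client-go sketch of an equivalent poll, assuming the kubeconfig path from this log; minikube's own helper in pod_ready.go may differ in detail:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-9975d5f86-v4sfl", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("WaitExtra: waitPodCondition: context deadline exceeded")
}
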
I0717 20:30:14.213513 933934 api_server.go:52] waiting for apiserver process to appear ...
I0717 20:30:14.213547 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0717 20:30:14.213608 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0717 20:30:14.267329 933934 cri.go:89] found id: "cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c"
I0717 20:30:14.267350 933934 cri.go:89] found id: "f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc"
I0717 20:30:14.267356 933934 cri.go:89] found id: ""
I0717 20:30:14.267364 933934 logs.go:276] 2 containers: [cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc]
I0717 20:30:14.267435 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.279495 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.283694 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0717 20:30:14.283767 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0717 20:30:14.332584 933934 cri.go:89] found id: "687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c"
I0717 20:30:14.332625 933934 cri.go:89] found id: "e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1"
I0717 20:30:14.332631 933934 cri.go:89] found id: ""
I0717 20:30:14.332639 933934 logs.go:276] 2 containers: [687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1]
I0717 20:30:14.332701 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.337032 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.341616 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0717 20:30:14.341702 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0717 20:30:14.416560 933934 cri.go:89] found id: "6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4"
I0717 20:30:14.416585 933934 cri.go:89] found id: "03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294"
I0717 20:30:14.416590 933934 cri.go:89] found id: ""
I0717 20:30:14.416597 933934 logs.go:276] 2 containers: [6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4 03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294]
I0717 20:30:14.416653 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.420874 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.426407 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0717 20:30:14.426493 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0717 20:30:14.474023 933934 cri.go:89] found id: "d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75"
I0717 20:30:14.474043 933934 cri.go:89] found id: "bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef"
I0717 20:30:14.474048 933934 cri.go:89] found id: ""
I0717 20:30:14.474055 933934 logs.go:276] 2 containers: [d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75 bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef]
I0717 20:30:14.474115 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.479393 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.483562 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0717 20:30:14.483645 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0717 20:30:14.546766 933934 cri.go:89] found id: "4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608"
I0717 20:30:14.546789 933934 cri.go:89] found id: "3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4"
I0717 20:30:14.546794 933934 cri.go:89] found id: ""
I0717 20:30:14.546801 933934 logs.go:276] 2 containers: [4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608 3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4]
I0717 20:30:14.546858 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.551041 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.555838 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0717 20:30:14.555914 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0717 20:30:14.607756 933934 cri.go:89] found id: "87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08"
I0717 20:30:14.607786 933934 cri.go:89] found id: "fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f"
I0717 20:30:14.607793 933934 cri.go:89] found id: ""
I0717 20:30:14.607800 933934 logs.go:276] 2 containers: [87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08 fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f]
I0717 20:30:14.607870 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.611951 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.615691 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0717 20:30:14.615756 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0717 20:30:14.665764 933934 cri.go:89] found id: "1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874"
I0717 20:30:14.665792 933934 cri.go:89] found id: "62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b"
I0717 20:30:14.665797 933934 cri.go:89] found id: ""
I0717 20:30:14.665805 933934 logs.go:276] 2 containers: [1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874 62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b]
I0717 20:30:14.665859 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.672728 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.676956 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0717 20:30:14.677042 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0717 20:30:14.739516 933934 cri.go:89] found id: "57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0"
I0717 20:30:14.739541 933934 cri.go:89] found id: "89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5"
I0717 20:30:14.739562 933934 cri.go:89] found id: ""
I0717 20:30:14.739570 933934 logs.go:276] 2 containers: [57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0 89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5]
I0717 20:30:14.739628 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.743476 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:14.748788 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0717 20:30:14.748859 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0717 20:30:14.825367 933934 cri.go:89] found id: "44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1"
I0717 20:30:14.825454 933934 cri.go:89] found id: ""
I0717 20:30:14.825477 933934 logs.go:276] 1 containers: [44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1]
I0717 20:30:14.825567 933934 ssh_runner.go:195] Run: which crictl
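
Each cri.go:54 block above resolves container IDs by name via crictl ps -a --quiet --name=<pattern>, typically finding two IDs per component: the live container and its pre-restart predecessor. A small os/exec sketch of that lookup:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers lists all containers (running or exited) whose name
// matches, one 64-hex ID per line, as logged at cri.go:89.
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := findContainers("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
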
I0717 20:30:14.829469 933934 logs.go:123] Gathering logs for containerd ...
I0717 20:30:14.829492 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0717 20:30:14.951471 933934 logs.go:123] Gathering logs for container status ...
I0717 20:30:14.951551 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0717 20:30:15.002401 933934 logs.go:123] Gathering logs for describe nodes ...
I0717 20:30:15.002553 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0717 20:30:15.257508 933934 logs.go:123] Gathering logs for kube-apiserver [f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc] ...
I0717 20:30:15.257583 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc"
I0717 20:30:15.326653 933934 logs.go:123] Gathering logs for etcd [687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c] ...
I0717 20:30:15.326685 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c"
I0717 20:30:15.396670 933934 logs.go:123] Gathering logs for etcd [e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1] ...
I0717 20:30:15.396750 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1"
I0717 20:30:15.473355 933934 logs.go:123] Gathering logs for kube-controller-manager [fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f] ...
I0717 20:30:15.473711 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f"
I0717 20:30:15.552525 933934 logs.go:123] Gathering logs for storage-provisioner [57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0] ...
I0717 20:30:15.552603 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0"
I0717 20:30:15.637695 933934 logs.go:123] Gathering logs for kindnet [62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b] ...
I0717 20:30:15.637726 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b"
I0717 20:30:15.746665 933934 logs.go:123] Gathering logs for kubelet ...
I0717 20:30:15.746710 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0717 20:30:15.806130 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.042220 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-7976r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-7976r" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.806356 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082566 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.806573 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082721 662 reflector.go:138] object-"kube-system"/"coredns-token-mrwzx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mrwzx" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.806781 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082802 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.807012 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082891 662 reflector.go:138] object-"kube-system"/"kindnet-token-g6tv7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g6tv7" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.810861 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202724 662 reflector.go:138] object-"kube-system"/"metrics-server-token-fksnv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fksnv" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.811163 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202810 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-rtf2k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-rtf2k" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.811378 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202881 662 reflector.go:138] object-"default"/"default-token-9ftpp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-9ftpp" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:15.819326 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:49 old-k8s-version-069806 kubelet[662]: E0717 20:24:49.183838 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:15.820329 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:50 old-k8s-version-069806 kubelet[662]: E0717 20:24:50.150410 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.823214 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:01 old-k8s-version-069806 kubelet[662]: E0717 20:25:01.900030 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:15.825532 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:11 old-k8s-version-069806 kubelet[662]: E0717 20:25:11.253734 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.825892 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:12 old-k8s-version-069806 kubelet[662]: E0717 20:25:12.256602 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.826084 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:12 old-k8s-version-069806 kubelet[662]: E0717 20:25:12.874626 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.826439 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:16 old-k8s-version-069806 kubelet[662]: E0717 20:25:16.266437 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.827342 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:20 old-k8s-version-069806 kubelet[662]: E0717 20:25:20.299236 662 pod_workers.go:191] Error syncing pod b733c4a6-f6de-426d-86c9-67948261d437 ("storage-provisioner_kube-system(b733c4a6-f6de-426d-86c9-67948261d437)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b733c4a6-f6de-426d-86c9-67948261d437)"
W0717 20:30:15.830056 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:27 old-k8s-version-069806 kubelet[662]: E0717 20:25:27.898396 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:15.831034 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:30 old-k8s-version-069806 kubelet[662]: E0717 20:25:30.313468 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.831554 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:36 old-k8s-version-069806 kubelet[662]: E0717 20:25:36.266512 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.831804 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:40 old-k8s-version-069806 kubelet[662]: E0717 20:25:40.874479 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.832142 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:51 old-k8s-version-069806 kubelet[662]: E0717 20:25:51.875881 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.832612 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:53 old-k8s-version-069806 kubelet[662]: E0717 20:25:53.400984 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.832942 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:56 old-k8s-version-069806 kubelet[662]: E0717 20:25:56.266353 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.833146 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:03 old-k8s-version-069806 kubelet[662]: E0717 20:26:03.874232 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.833523 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:09 old-k8s-version-069806 kubelet[662]: E0717 20:26:09.873806 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.836067 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:17 old-k8s-version-069806 kubelet[662]: E0717 20:26:17.885290 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:15.836428 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:21 old-k8s-version-069806 kubelet[662]: E0717 20:26:21.873819 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.836617 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:30 old-k8s-version-069806 kubelet[662]: E0717 20:26:30.874108 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.837267 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:37 old-k8s-version-069806 kubelet[662]: E0717 20:26:37.530420 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.837473 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:45 old-k8s-version-069806 kubelet[662]: E0717 20:26:45.874342 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.837808 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:46 old-k8s-version-069806 kubelet[662]: E0717 20:26:46.266436 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.838137 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:56 old-k8s-version-069806 kubelet[662]: E0717 20:26:56.873836 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.838324 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:00 old-k8s-version-069806 kubelet[662]: E0717 20:27:00.874137 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.838656 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:11 old-k8s-version-069806 kubelet[662]: E0717 20:27:11.873895 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.838842 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:13 old-k8s-version-069806 kubelet[662]: E0717 20:27:13.874116 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.839175 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:24 old-k8s-version-069806 kubelet[662]: E0717 20:27:24.873860 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.839382 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:28 old-k8s-version-069806 kubelet[662]: E0717 20:27:28.874119 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.839712 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:37 old-k8s-version-069806 kubelet[662]: E0717 20:27:37.874238 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.842301 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:41 old-k8s-version-069806 kubelet[662]: E0717 20:27:41.890695 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:15.842898 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:48 old-k8s-version-069806 kubelet[662]: E0717 20:27:48.873801 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.843101 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:56 old-k8s-version-069806 kubelet[662]: E0717 20:27:56.874330 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.843697 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:01 old-k8s-version-069806 kubelet[662]: E0717 20:28:01.743439 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.844026 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:06 old-k8s-version-069806 kubelet[662]: E0717 20:28:06.266883 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.844220 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:07 old-k8s-version-069806 kubelet[662]: E0717 20:28:07.874945 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.844574 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:19 old-k8s-version-069806 kubelet[662]: E0717 20:28:19.877659 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.844765 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:22 old-k8s-version-069806 kubelet[662]: E0717 20:28:22.874306 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.845161 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:33 old-k8s-version-069806 kubelet[662]: E0717 20:28:33.876747 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.845353 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:36 old-k8s-version-069806 kubelet[662]: E0717 20:28:36.874299 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.845696 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:46 old-k8s-version-069806 kubelet[662]: E0717 20:28:46.874249 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.845885 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:48 old-k8s-version-069806 kubelet[662]: E0717 20:28:48.874132 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.846215 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:58 old-k8s-version-069806 kubelet[662]: E0717 20:28:58.873786 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.846400 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:59 old-k8s-version-069806 kubelet[662]: E0717 20:28:59.874380 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.846728 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:11 old-k8s-version-069806 kubelet[662]: E0717 20:29:11.874254 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.846913 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:12 old-k8s-version-069806 kubelet[662]: E0717 20:29:12.874187 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.847274 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:26 old-k8s-version-069806 kubelet[662]: E0717 20:29:26.873840 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.847468 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:27 old-k8s-version-069806 kubelet[662]: E0717 20:29:27.875007 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.847853 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:38 old-k8s-version-069806 kubelet[662]: E0717 20:29:38.874336 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.848049 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:39 old-k8s-version-069806 kubelet[662]: E0717 20:29:39.874118 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.848393 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:49 old-k8s-version-069806 kubelet[662]: E0717 20:29:49.873788 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.848587 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:53 old-k8s-version-069806 kubelet[662]: E0717 20:29:53.876300 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.848920 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:02 old-k8s-version-069806 kubelet[662]: E0717 20:30:02.874275 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:15.849105 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:07 old-k8s-version-069806 kubelet[662]: E0717 20:30:07.874813 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:15.849436 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:14 old-k8s-version-069806 kubelet[662]: E0717 20:30:14.873785 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
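Two failure patterns dominate the scan above, and both are expected in this test. The metrics-server pod can never pull its image because the addon is deliberately pointed at the nonexistent registry fake.domain, so ErrImagePull and ImagePullBackOff alternate for the whole run; the dashboard-metrics-scraper container starts, crashes, and sits in CrashLoopBackOff with the kubelet's doubling restart delay (10s, 20s, 40s, 1m20s, 2m40s). The burst of reflector "no relationship found between node ... and this object" errors at 20:24:47 is a restart transient: the node authorizer has not yet re-linked the node to its pods, so the kubelet's first secret/configmap watches are denied until registration completes. A minimal way to inspect the two failing pods on the node, assuming only that crictl is on PATH (sketch):

  # List all instances (running and exited) of the two problem containers.
  sudo crictl ps -a --name dashboard-metrics-scraper
  sudo crictl ps -a --name metrics-server
  # Tail the most recent crash output of each scraper instance.
  for id in $(sudo crictl ps -a --quiet --name dashboard-metrics-scraper); do
    sudo crictl logs --tail 50 "$id"
  done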
I0717 20:30:15.849448 933934 logs.go:123] Gathering logs for coredns [6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4] ...
I0717 20:30:15.849464 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4"
I0717 20:30:15.925345 933934 logs.go:123] Gathering logs for kube-scheduler [d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75] ...
I0717 20:30:15.925372 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75"
I0717 20:30:15.979819 933934 logs.go:123] Gathering logs for kube-controller-manager [87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08] ...
I0717 20:30:15.979850 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08"
I0717 20:30:16.115833 933934 logs.go:123] Gathering logs for storage-provisioner [89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5] ...
I0717 20:30:16.115883 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5"
I0717 20:30:16.208534 933934 logs.go:123] Gathering logs for kubernetes-dashboard [44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1] ...
I0717 20:30:16.208558 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1"
I0717 20:30:16.281385 933934 logs.go:123] Gathering logs for kindnet [1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874] ...
I0717 20:30:16.281410 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874"
I0717 20:30:16.387166 933934 logs.go:123] Gathering logs for dmesg ...
I0717 20:30:16.387204 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0717 20:30:16.451258 933934 logs.go:123] Gathering logs for kube-apiserver [cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c] ...
I0717 20:30:16.451330 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c"
I0717 20:30:16.530536 933934 logs.go:123] Gathering logs for coredns [03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294] ...
I0717 20:30:16.530569 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294"
I0717 20:30:16.587298 933934 logs.go:123] Gathering logs for kube-scheduler [bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef] ...
I0717 20:30:16.587323 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef"
I0717 20:30:16.653727 933934 logs.go:123] Gathering logs for kube-proxy [4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608] ...
I0717 20:30:16.653809 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608"
I0717 20:30:16.705193 933934 logs.go:123] Gathering logs for kube-proxy [3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4] ...
I0717 20:30:16.705229 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4"
I0717 20:30:16.766943 933934 out.go:304] Setting ErrFile to fd 2...
I0717 20:30:16.766971 933934 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0717 20:30:16.767041 933934 out.go:239] X Problems detected in kubelet:
W0717 20:30:16.767063 933934 out.go:239] Jul 17 20:29:49 old-k8s-version-069806 kubelet[662]: E0717 20:29:49.873788 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:16.767086 933934 out.go:239] Jul 17 20:29:53 old-k8s-version-069806 kubelet[662]: E0717 20:29:53.876300 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:16.767107 933934 out.go:239] Jul 17 20:30:02 old-k8s-version-069806 kubelet[662]: E0717 20:30:02.874275 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:16.767119 933934 out.go:239] Jul 17 20:30:07 old-k8s-version-069806 kubelet[662]: E0717 20:30:07.874813 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:16.767134 933934 out.go:239] Jul 17 20:30:14 old-k8s-version-069806 kubelet[662]: E0717 20:30:14.873785 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
I0717 20:30:16.767141 933934 out.go:304] Setting ErrFile to fd 2...
I0717 20:30:16.767155 933934 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:30:17.186462 944109 ssh_runner.go:195] Run: openssl version
I0717 20:30:17.192214 944109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0717 20:30:17.202157 944109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0717 20:30:17.205887 944109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:36 /usr/share/ca-certificates/minikubeCA.pem
I0717 20:30:17.206001 944109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0717 20:30:17.212931 944109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0717 20:30:17.222743 944109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/726225.pem && ln -fs /usr/share/ca-certificates/726225.pem /etc/ssl/certs/726225.pem"
I0717 20:30:17.232183 944109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/726225.pem
I0717 20:30:17.235652 944109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 19:43 /usr/share/ca-certificates/726225.pem
I0717 20:30:17.235738 944109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/726225.pem
I0717 20:30:17.242673 944109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/726225.pem /etc/ssl/certs/51391683.0"
I0717 20:30:17.252550 944109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7262252.pem && ln -fs /usr/share/ca-certificates/7262252.pem /etc/ssl/certs/7262252.pem"
I0717 20:30:17.262172 944109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7262252.pem
I0717 20:30:17.265794 944109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 19:43 /usr/share/ca-certificates/7262252.pem
I0717 20:30:17.265906 944109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7262252.pem
I0717 20:30:17.273181 944109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7262252.pem /etc/ssl/certs/3ec20f2e.0"
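The three-certificate sequence above installs each CA into the node's OpenSSL trust store twice: once under its own name in /usr/share/ca-certificates, and once in /etc/ssl/certs as a symlink named after the certificate's subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 here), which is the lookup key OpenSSL uses at verification time. The step reduces to the following, shown for the minikubeCA.pem path from this run (sketch):

  # Compute the subject hash, then create the <hash>.0 symlink OpenSSL expects.
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"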
I0717 20:30:17.282534 944109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0717 20:30:17.285808 944109 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
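The stat failure above is the expected first-start signal: minikube probes for the kubelet client certificate and treats a missing file as proof that no cluster has ever been initialized on this node, so it takes the fresh kubeadm init path rather than the restart path. The probe amounts to (sketch):

  # A nonzero exit from stat means no prior cluster state on this node.
  sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt || echo "first start"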
I0717 20:30:17.285872 944109 kubeadm.go:392] StartCluster: {Name:embed-certs-195036 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-195036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 20:30:17.285957 944109 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0717 20:30:17.286016 944109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0717 20:30:17.324998 944109 cri.go:89] found id: ""
I0717 20:30:17.325088 944109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0717 20:30:17.334203 944109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0717 20:30:17.342999 944109 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0717 20:30:17.343138 944109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0717 20:30:17.359989 944109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0717 20:30:17.360027 944109 kubeadm.go:157] found existing configuration files:
I0717 20:30:17.360190 944109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0717 20:30:17.373228 944109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0717 20:30:17.373297 944109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0717 20:30:17.383822 944109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0717 20:30:17.394498 944109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0717 20:30:17.394653 944109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0717 20:30:17.405696 944109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0717 20:30:17.415123 944109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0717 20:30:17.415284 944109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0717 20:30:17.424066 944109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0717 20:30:17.433223 944109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0717 20:30:17.433288 944109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
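The eight commands above are a stale-config sweep: for each of the four kubeconfig files kubeadm writes, grep for the expected control-plane endpoint and remove the file if the endpoint is absent. In this run every grep exits with status 2 because the files do not exist yet, so the rm calls are no-ops. Condensed to shell (sketch, using the endpoint from this run):

  endpoint="https://control-plane.minikube.internal:8443"
  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
  done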
I0717 20:30:17.443134 944109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0717 20:30:17.493573 944109 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
I0717 20:30:17.493992 944109 kubeadm.go:310] [preflight] Running pre-flight checks
I0717 20:30:17.535820 944109 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0717 20:30:17.535897 944109 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1064-aws
I0717 20:30:17.535937 944109 kubeadm.go:310] OS: Linux
I0717 20:30:17.535986 944109 kubeadm.go:310] CGROUPS_CPU: enabled
I0717 20:30:17.536035 944109 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0717 20:30:17.536115 944109 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0717 20:30:17.536167 944109 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0717 20:30:17.536217 944109 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0717 20:30:17.536269 944109 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0717 20:30:17.536320 944109 kubeadm.go:310] CGROUPS_PIDS: enabled
I0717 20:30:17.536370 944109 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0717 20:30:17.536418 944109 kubeadm.go:310] CGROUPS_BLKIO: enabled
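The verification table above is kubeadm checking that the kernel exposes the cgroup controllers the kubelet needs; minikube puts SystemVerification in its ignore list (see 20:30:17.342999) because a docker-driver node shares the host kernel rather than booting its own. The same controllers can be confirmed directly on a cgroup v1 host like this one (sketch):

  # Each controller kubeadm listed should show enabled=1 in the last column.
  cat /proc/cgroups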
I0717 20:30:17.607759 944109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0717 20:30:17.607951 944109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0717 20:30:17.608098 944109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0717 20:30:17.851611 944109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0717 20:30:17.855276 944109 out.go:204] - Generating certificates and keys ...
I0717 20:30:17.855480 944109 kubeadm.go:310] [certs] Using existing ca certificate authority
I0717 20:30:17.855603 944109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0717 20:30:19.097702 944109 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0717 20:30:19.556647 944109 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0717 20:30:19.938776 944109 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0717 20:30:20.514473 944109 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0717 20:30:21.220984 944109 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0717 20:30:21.221396 944109 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-195036 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0717 20:30:22.011336 944109 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0717 20:30:22.011672 944109 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-195036 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0717 20:30:22.388283 944109 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0717 20:30:22.933016 944109 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0717 20:30:23.332722 944109 kubeadm.go:310] [certs] Generating "sa" key and public key
I0717 20:30:23.332983 944109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0717 20:30:23.685063 944109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0717 20:30:24.029240 944109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0717 20:30:24.239143 944109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0717 20:30:24.435933 944109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0717 20:30:25.150976 944109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0717 20:30:25.151593 944109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0717 20:30:25.155145 944109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0717 20:30:25.157443 944109 out.go:204] - Booting up control plane ...
I0717 20:30:25.157561 944109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0717 20:30:25.158137 944109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0717 20:30:25.159618 944109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0717 20:30:25.176639 944109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0717 20:30:25.178505 944109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0717 20:30:25.178956 944109 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0717 20:30:25.282048 944109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0717 20:30:25.282144 944109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
I0717 20:30:26.283143 944109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00154778s
I0717 20:30:26.283234 944109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
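The init run above walks kubeadm's phases in their standard order: preflight, certs, kubeconfig, etcd and control-plane static-pod manifests, kubelet-start, then the kubelet and API health checks with their 4m0s budgets. As the preflight hint suggests, the image download can be done ahead of time to take it off the init critical path (sketch, pinned to this run's version):

  sudo kubeadm config images pull --kubernetes-version v1.30.2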
I0717 20:30:26.768428 933934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0717 20:30:26.788723 933934 api_server.go:72] duration metric: took 5m56.839602005s to wait for apiserver process to appear ...
I0717 20:30:26.788747 933934 api_server.go:88] waiting for apiserver healthz status ...
I0717 20:30:26.788782 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0717 20:30:26.788852 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0717 20:30:26.854663 933934 cri.go:89] found id: "cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c"
I0717 20:30:26.854684 933934 cri.go:89] found id: "f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc"
I0717 20:30:26.854699 933934 cri.go:89] found id: ""
I0717 20:30:26.854706 933934 logs.go:276] 2 containers: [cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc]
I0717 20:30:26.854760 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:26.859009 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:26.866186 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0717 20:30:26.866263 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0717 20:30:26.950492 933934 cri.go:89] found id: "687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c"
I0717 20:30:26.950587 933934 cri.go:89] found id: "e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1"
I0717 20:30:26.950628 933934 cri.go:89] found id: ""
I0717 20:30:26.950673 933934 logs.go:276] 2 containers: [687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1]
I0717 20:30:26.950792 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:26.957062 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:26.961377 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0717 20:30:26.961473 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0717 20:30:27.059207 933934 cri.go:89] found id: "6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4"
I0717 20:30:27.059234 933934 cri.go:89] found id: "03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294"
I0717 20:30:27.059245 933934 cri.go:89] found id: ""
I0717 20:30:27.059261 933934 logs.go:276] 2 containers: [6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4 03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294]
I0717 20:30:27.059365 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.067097 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.075702 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0717 20:30:27.075928 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0717 20:30:27.167718 933934 cri.go:89] found id: "d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75"
I0717 20:30:27.167820 933934 cri.go:89] found id: "bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef"
I0717 20:30:27.167851 933934 cri.go:89] found id: ""
I0717 20:30:27.167885 933934 logs.go:276] 2 containers: [d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75 bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef]
I0717 20:30:27.168019 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.178946 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.187082 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0717 20:30:27.187280 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0717 20:30:27.264168 933934 cri.go:89] found id: "4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608"
I0717 20:30:27.264295 933934 cri.go:89] found id: "3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4"
I0717 20:30:27.264323 933934 cri.go:89] found id: ""
I0717 20:30:27.264369 933934 logs.go:276] 2 containers: [4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608 3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4]
I0717 20:30:27.264494 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.271255 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.276920 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0717 20:30:27.277160 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0717 20:30:27.352163 933934 cri.go:89] found id: "87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08"
I0717 20:30:27.352255 933934 cri.go:89] found id: "fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f"
I0717 20:30:27.352279 933934 cri.go:89] found id: ""
I0717 20:30:27.352338 933934 logs.go:276] 2 containers: [87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08 fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f]
I0717 20:30:27.352453 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.367677 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.371311 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0717 20:30:27.371398 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0717 20:30:27.429211 933934 cri.go:89] found id: "1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874"
I0717 20:30:27.429289 933934 cri.go:89] found id: "62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b"
I0717 20:30:27.429313 933934 cri.go:89] found id: ""
I0717 20:30:27.429367 933934 logs.go:276] 2 containers: [1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874 62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b]
I0717 20:30:27.429472 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.436626 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.440515 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0717 20:30:27.440675 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0717 20:30:27.514271 933934 cri.go:89] found id: "57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0"
I0717 20:30:27.514316 933934 cri.go:89] found id: "89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5"
I0717 20:30:27.514322 933934 cri.go:89] found id: ""
I0717 20:30:27.514330 933934 logs.go:276] 2 containers: [57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0 89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5]
I0717 20:30:27.514397 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.519671 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.525084 933934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0717 20:30:27.525187 933934 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0717 20:30:27.589900 933934 cri.go:89] found id: "44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1"
I0717 20:30:27.589925 933934 cri.go:89] found id: ""
I0717 20:30:27.589933 933934 logs.go:276] 1 containers: [44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1]
I0717 20:30:27.589997 933934 ssh_runner.go:195] Run: which crictl
I0717 20:30:27.596715 933934 logs.go:123] Gathering logs for describe nodes ...
I0717 20:30:27.596747 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
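This is the only gather that talks to the API server rather than the container runtime: it shells out to the kubectl binary minikube pinned for v1.20.0 on the node. The equivalent check from the host would go through the bundled kubectl wrapper (a sketch; assumes the cluster is still reachable):
    out/minikube-linux-arm64 -p old-k8s-version-069806 kubectl -- describe nodes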
I0717 20:30:27.947618 933934 logs.go:123] Gathering logs for kube-scheduler [d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75] ...
I0717 20:30:27.947656 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75"
I0717 20:30:28.092581 933934 logs.go:123] Gathering logs for kube-proxy [3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4] ...
I0717 20:30:28.092616 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4"
I0717 20:30:28.170361 933934 logs.go:123] Gathering logs for storage-provisioner [89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5] ...
I0717 20:30:28.170432 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5"
I0717 20:30:28.276709 933934 logs.go:123] Gathering logs for kube-proxy [4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608] ...
I0717 20:30:28.276780 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608"
I0717 20:30:28.344367 933934 logs.go:123] Gathering logs for kube-controller-manager [fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f] ...
I0717 20:30:28.344447 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f"
I0717 20:30:28.451110 933934 logs.go:123] Gathering logs for kindnet [1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874] ...
I0717 20:30:28.451194 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874"
I0717 20:30:28.542709 933934 logs.go:123] Gathering logs for kindnet [62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b] ...
I0717 20:30:28.542802 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b"
I0717 20:30:28.629609 933934 logs.go:123] Gathering logs for kubelet ...
I0717 20:30:28.629687 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0717 20:30:28.722241 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.042220 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-7976r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-7976r" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.722576 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082566 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.722820 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082721 662 reflector.go:138] object-"kube-system"/"coredns-token-mrwzx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-mrwzx" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.723070 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082802 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.723310 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.082891 662 reflector.go:138] object-"kube-system"/"kindnet-token-g6tv7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-g6tv7" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.727196 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202724 662 reflector.go:138] object-"kube-system"/"metrics-server-token-fksnv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fksnv" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.730359 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202810 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-rtf2k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-rtf2k" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.730625 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:47 old-k8s-version-069806 kubelet[662]: E0717 20:24:47.202881 662 reflector.go:138] object-"default"/"default-token-9ftpp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-9ftpp" is forbidden: User "system:node:old-k8s-version-069806" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-069806' and this object
W0717 20:30:28.738782 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:49 old-k8s-version-069806 kubelet[662]: E0717 20:24:49.183838 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.745235 933934 logs.go:138] Found kubelet problem: Jul 17 20:24:50 old-k8s-version-069806 kubelet[662]: E0717 20:24:50.150410 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.748361 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:01 old-k8s-version-069806 kubelet[662]: E0717 20:25:01.900030 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.750574 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:11 old-k8s-version-069806 kubelet[662]: E0717 20:25:11.253734 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.755725 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:12 old-k8s-version-069806 kubelet[662]: E0717 20:25:12.256602 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.755972 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:12 old-k8s-version-069806 kubelet[662]: E0717 20:25:12.874626 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.756377 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:16 old-k8s-version-069806 kubelet[662]: E0717 20:25:16.266437 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.757278 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:20 old-k8s-version-069806 kubelet[662]: E0717 20:25:20.299236 662 pod_workers.go:191] Error syncing pod b733c4a6-f6de-426d-86c9-67948261d437 ("storage-provisioner_kube-system(b733c4a6-f6de-426d-86c9-67948261d437)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b733c4a6-f6de-426d-86c9-67948261d437)"
W0717 20:30:28.760001 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:27 old-k8s-version-069806 kubelet[662]: E0717 20:25:27.898396 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.761036 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:30 old-k8s-version-069806 kubelet[662]: E0717 20:25:30.313468 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.761536 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:36 old-k8s-version-069806 kubelet[662]: E0717 20:25:36.266512 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.761749 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:40 old-k8s-version-069806 kubelet[662]: E0717 20:25:40.874479 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.762116 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:51 old-k8s-version-069806 kubelet[662]: E0717 20:25:51.875881 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.762616 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:53 old-k8s-version-069806 kubelet[662]: E0717 20:25:53.400984 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.762983 933934 logs.go:138] Found kubelet problem: Jul 17 20:25:56 old-k8s-version-069806 kubelet[662]: E0717 20:25:56.266353 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.763201 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:03 old-k8s-version-069806 kubelet[662]: E0717 20:26:03.874232 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.763561 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:09 old-k8s-version-069806 kubelet[662]: E0717 20:26:09.873806 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.769545 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:17 old-k8s-version-069806 kubelet[662]: E0717 20:26:17.885290 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.769933 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:21 old-k8s-version-069806 kubelet[662]: E0717 20:26:21.873819 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.770148 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:30 old-k8s-version-069806 kubelet[662]: E0717 20:26:30.874108 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.770789 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:37 old-k8s-version-069806 kubelet[662]: E0717 20:26:37.530420 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.771010 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:45 old-k8s-version-069806 kubelet[662]: E0717 20:26:45.874342 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.771386 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:46 old-k8s-version-069806 kubelet[662]: E0717 20:26:46.266436 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.771754 933934 logs.go:138] Found kubelet problem: Jul 17 20:26:56 old-k8s-version-069806 kubelet[662]: E0717 20:26:56.873836 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.771969 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:00 old-k8s-version-069806 kubelet[662]: E0717 20:27:00.874137 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.772350 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:11 old-k8s-version-069806 kubelet[662]: E0717 20:27:11.873895 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.772565 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:13 old-k8s-version-069806 kubelet[662]: E0717 20:27:13.874116 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.772943 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:24 old-k8s-version-069806 kubelet[662]: E0717 20:27:24.873860 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.773157 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:28 old-k8s-version-069806 kubelet[662]: E0717 20:27:28.874119 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.773519 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:37 old-k8s-version-069806 kubelet[662]: E0717 20:27:37.874238 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.776173 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:41 old-k8s-version-069806 kubelet[662]: E0717 20:27:41.890695 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.776548 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:48 old-k8s-version-069806 kubelet[662]: E0717 20:27:48.873801 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.776763 933934 logs.go:138] Found kubelet problem: Jul 17 20:27:56 old-k8s-version-069806 kubelet[662]: E0717 20:27:56.874330 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.777477 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:01 old-k8s-version-069806 kubelet[662]: E0717 20:28:01.743439 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.777845 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:06 old-k8s-version-069806 kubelet[662]: E0717 20:28:06.266883 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.778060 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:07 old-k8s-version-069806 kubelet[662]: E0717 20:28:07.874945 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.778428 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:19 old-k8s-version-069806 kubelet[662]: E0717 20:28:19.877659 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.778647 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:22 old-k8s-version-069806 kubelet[662]: E0717 20:28:22.874306 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.779018 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:33 old-k8s-version-069806 kubelet[662]: E0717 20:28:33.876747 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.779245 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:36 old-k8s-version-069806 kubelet[662]: E0717 20:28:36.874299 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.779617 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:46 old-k8s-version-069806 kubelet[662]: E0717 20:28:46.874249 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.779835 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:48 old-k8s-version-069806 kubelet[662]: E0717 20:28:48.874132 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.780232 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:58 old-k8s-version-069806 kubelet[662]: E0717 20:28:58.873786 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.780448 933934 logs.go:138] Found kubelet problem: Jul 17 20:28:59 old-k8s-version-069806 kubelet[662]: E0717 20:28:59.874380 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.780808 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:11 old-k8s-version-069806 kubelet[662]: E0717 20:29:11.874254 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.781029 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:12 old-k8s-version-069806 kubelet[662]: E0717 20:29:12.874187 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.781398 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:26 old-k8s-version-069806 kubelet[662]: E0717 20:29:26.873840 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.781614 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:27 old-k8s-version-069806 kubelet[662]: E0717 20:29:27.875007 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.781979 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:38 old-k8s-version-069806 kubelet[662]: E0717 20:29:38.874336 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.782198 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:39 old-k8s-version-069806 kubelet[662]: E0717 20:29:39.874118 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.782564 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:49 old-k8s-version-069806 kubelet[662]: E0717 20:29:49.873788 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.782843 933934 logs.go:138] Found kubelet problem: Jul 17 20:29:53 old-k8s-version-069806 kubelet[662]: E0717 20:29:53.876300 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.783215 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:02 old-k8s-version-069806 kubelet[662]: E0717 20:30:02.874275 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.783432 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:07 old-k8s-version-069806 kubelet[662]: E0717 20:30:07.874813 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:28.783792 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:14 old-k8s-version-069806 kubelet[662]: E0717 20:30:14.873785 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:28.786430 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:22 old-k8s-version-069806 kubelet[662]: E0717 20:30:22.883566 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:28.786798 933934 logs.go:138] Found kubelet problem: Jul 17 20:30:25 old-k8s-version-069806 kubelet[662]: E0717 20:30:25.883025 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
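All of the warnings above come from the single 400-line kubelet journal pull at 20:30:28. The 20:24:47 "no relationship found between node ... and this object" burst is transient post-restart RBAC noise, while two failures persist across the whole window: metrics-server can never pull its image (fake.domain, an unresolvable registry this test appears to use deliberately), and dashboard-metrics-scraper crash-loops with an ever-growing back-off. A rough way to re-derive the list on the node (the grep pattern is an assumption, not minikube's exact matcher):
    sudo journalctl -u kubelet -n 400 --no-pager | grep -E ' E[0-9]{4} '   # kubelet error-level lines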
I0717 20:30:28.786828 933934 logs.go:123] Gathering logs for coredns [6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4] ...
I0717 20:30:28.786863 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4"
I0717 20:30:28.833770 933934 logs.go:123] Gathering logs for coredns [03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294] ...
I0717 20:30:28.833849 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294"
I0717 20:30:28.895551 933934 logs.go:123] Gathering logs for kube-scheduler [bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef] ...
I0717 20:30:28.895628 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef"
I0717 20:30:28.995745 933934 logs.go:123] Gathering logs for containerd ...
I0717 20:30:28.995823 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0717 20:30:29.100068 933934 logs.go:123] Gathering logs for dmesg ...
I0717 20:30:29.100146 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0717 20:30:29.125900 933934 logs.go:123] Gathering logs for kube-apiserver [cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c] ...
I0717 20:30:29.126061 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c"
I0717 20:30:29.268391 933934 logs.go:123] Gathering logs for etcd [e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1] ...
I0717 20:30:29.268472 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1"
I0717 20:30:29.345202 933934 logs.go:123] Gathering logs for kube-controller-manager [87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08] ...
I0717 20:30:29.345282 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08"
I0717 20:30:29.445718 933934 logs.go:123] Gathering logs for container status ...
I0717 20:30:29.445797 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0717 20:30:29.515085 933934 logs.go:123] Gathering logs for kube-apiserver [f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc] ...
I0717 20:30:29.515163 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc"
I0717 20:30:29.590224 933934 logs.go:123] Gathering logs for etcd [687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c] ...
I0717 20:30:29.590313 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c"
I0717 20:30:29.682136 933934 logs.go:123] Gathering logs for storage-provisioner [57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0] ...
I0717 20:30:29.682207 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0"
I0717 20:30:29.861255 933934 logs.go:123] Gathering logs for kubernetes-dashboard [44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1] ...
I0717 20:30:29.861335 933934 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1"
I0717 20:30:30.009936 933934 out.go:304] Setting ErrFile to fd 2...
I0717 20:30:30.010027 933934 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0717 20:30:30.010116 933934 out.go:239] X Problems detected in kubelet:
W0717 20:30:30.010456 933934 out.go:239] Jul 17 20:30:02 old-k8s-version-069806 kubelet[662]: E0717 20:30:02.874275 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:30.010578 933934 out.go:239] Jul 17 20:30:07 old-k8s-version-069806 kubelet[662]: E0717 20:30:07.874813 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:30:30.010645 933934 out.go:239] Jul 17 20:30:14 old-k8s-version-069806 kubelet[662]: E0717 20:30:14.873785 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
W0717 20:30:30.010680 933934 out.go:239] Jul 17 20:30:22 old-k8s-version-069806 kubelet[662]: E0717 20:30:22.883566 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:30:30.010757 933934 out.go:239] Jul 17 20:30:25 old-k8s-version-069806 kubelet[662]: E0717 20:30:25.883025 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
I0717 20:30:30.010795 933934 out.go:304] Setting ErrFile to fd 2...
I0717 20:30:30.010911 933934 out.go:338] TERM=,COLORTERM=, which probably does not support color
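The five lines re-printed above are minikube's curated summary of the kubelet scan, not new events. Note also that the PID column switches from 933934 to 944109 on the next line: the journal interleaves a second, concurrent test run (the embed-certs-195036 profile), whose kubeadm bootstrap is succeeding at the same time this profile is failing. To triage the two flagged pods directly (pod names taken from the warnings above; the context name assumes minikube's default of naming it after the profile):
    kubectl --context old-k8s-version-069806 -n kube-system describe pod metrics-server-9975d5f86-v4sfl
    kubectl --context old-k8s-version-069806 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-pmx8k --previous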
I0717 20:30:34.785676 944109 kubeadm.go:310] [api-check] The API server is healthy after 8.50254231s
I0717 20:30:34.805932 944109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0717 20:30:34.821318 944109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0717 20:30:34.846598 944109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0717 20:30:34.846801 944109 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-195036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0717 20:30:34.859418 944109 kubeadm.go:310] [bootstrap-token] Using token: 8ozjd6.lvmlncn5kssvgxzc
I0717 20:30:34.861682 944109 out.go:204] - Configuring RBAC rules ...
I0717 20:30:34.861809 944109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0717 20:30:34.867427 944109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0717 20:30:34.875845 944109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0717 20:30:34.879906 944109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0717 20:30:34.886701 944109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0717 20:30:34.890572 944109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0717 20:30:35.193564 944109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0717 20:30:35.631321 944109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0717 20:30:36.194574 944109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0717 20:30:36.195914 944109 kubeadm.go:310]
I0717 20:30:36.195993 944109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0717 20:30:36.196005 944109 kubeadm.go:310]
I0717 20:30:36.196115 944109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0717 20:30:36.196126 944109 kubeadm.go:310]
I0717 20:30:36.196150 944109 kubeadm.go:310] mkdir -p $HOME/.kube
I0717 20:30:36.196211 944109 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0717 20:30:36.196260 944109 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0717 20:30:36.196265 944109 kubeadm.go:310]
I0717 20:30:36.196316 944109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0717 20:30:36.196321 944109 kubeadm.go:310]
I0717 20:30:36.196367 944109 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0717 20:30:36.196371 944109 kubeadm.go:310]
I0717 20:30:36.196421 944109 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0717 20:30:36.196493 944109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0717 20:30:36.196558 944109 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0717 20:30:36.196562 944109 kubeadm.go:310]
I0717 20:30:36.196643 944109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0717 20:30:36.196716 944109 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0717 20:30:36.196725 944109 kubeadm.go:310]
I0717 20:30:36.196805 944109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8ozjd6.lvmlncn5kssvgxzc \
I0717 20:30:36.196905 944109 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:34e0ccb0f7b1bf9e782bfb56d5c2100b0d5ea9242ea9a17a9e471a56e94c8d3a \
I0717 20:30:36.196924 944109 kubeadm.go:310] --control-plane
I0717 20:30:36.196933 944109 kubeadm.go:310]
I0717 20:30:36.197014 944109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0717 20:30:36.197019 944109 kubeadm.go:310]
I0717 20:30:36.197097 944109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8ozjd6.lvmlncn5kssvgxzc \
I0717 20:30:36.197195 944109 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:34e0ccb0f7b1bf9e782bfb56d5c2100b0d5ea9242ea9a17a9e471a56e94c8d3a
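The join commands above are stock kubeadm output from the parallel embed-certs-195036 bootstrap. If the 8ozjd6.* bootstrap token ages out (default TTL is 24h), a fresh join line can be minted on the control plane, and the --discovery-token-ca-cert-hash can be recomputed from the CA certificate; both commands are standard kubeadm/openssl, not taken from this log:
    kubeadm token create --print-join-command
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'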
I0717 20:30:36.201985 944109 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1064-aws\n", err: exit status 1
I0717 20:30:36.202102 944109 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0717 20:30:36.202122 944109 cni.go:84] Creating CNI manager for ""
I0717 20:30:36.202130 944109 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0717 20:30:36.204460 944109 out.go:177] * Configuring CNI (Container Networking Interface) ...
I0717 20:30:36.206408 944109 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0717 20:30:36.210930 944109 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
I0717 20:30:36.210951 944109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0717 20:30:36.231518 944109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
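Having detected the docker driver + containerd runtime pairing, minikube scp's its 2438-byte kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the pinned v1.30.2 kubectl. A quick follow-up check that the CNI actually rolled out (assumes the DaemonSet is named kindnet, consistent with the kindnet-* pod names elsewhere in this log):
    sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status ds/kindnet --timeout=60s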
I0717 20:30:36.537367 944109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0717 20:30:36.537513 944109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 20:30:36.537612 944109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-195036 minikube.k8s.io/updated_at=2024_07_17T20_30_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=embed-certs-195036 minikube.k8s.io/primary=true
I0717 20:30:36.750086 944109 ops.go:34] apiserver oom_adj: -16
I0717 20:30:36.750200 944109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0717 20:30:40.013310 933934 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0717 20:30:40.037373 933934 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0717 20:30:40.040368 933934 out.go:177]
W0717 20:30:40.043794 933934 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0717 20:30:40.043842 933934 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0717 20:30:40.043863 933934 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0717 20:30:40.043869 933934 out.go:239] *
W0717 20:30:40.044970 933934 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 20:30:40.055997 933934 out.go:177]
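Exit status 102 corresponds to the K8S_UNHEALTHY_CONTROL_PLANE verdict above: /healthz answered 200 at 20:30:40, but the control plane never reported the expected v1.20.0 state within the 6m0s wait. Following the log's own advice in the safe order, capture logs before the destructive reset:
    out/minikube-linux-arm64 -p old-k8s-version-069806 logs --file=logs.txt   # attach to the GitHub issue
    out/minikube-linux-arm64 delete --all --purge                             # suggested reset; removes all profiles and cached state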
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
3ca5b52c96d52 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 96a9dfce9cc27 dashboard-metrics-scraper-8d5bb5db8-pmx8k
57b729d79f17d ba04bb24b9575 5 minutes ago Running storage-provisioner 3 513c5ae77c72d storage-provisioner
44895de258222 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 9a0159cfa45e6 kubernetes-dashboard-cd95d586-bllqh
6dc999a35d5b7 db91994f4ee8f 5 minutes ago Running coredns 1 97af6107c1659 coredns-74ff55c5b-9djzb
4157f24e3104c 25a5233254979 5 minutes ago Running kube-proxy 1 aa53d0fbf792e kube-proxy-gh8ms
1f4da85e26923 5e32961ddcea3 5 minutes ago Running kindnet-cni 1 c7d05390e7b01 kindnet-mv7j6
89d02f9d8a4af ba04bb24b9575 5 minutes ago Exited storage-provisioner 2 513c5ae77c72d storage-provisioner
6e6804e5bfb2a 1611cd07b61d5 5 minutes ago Running busybox 1 3abb1c3edbfdb busybox
d6de24a696fe6 e7605f88f17d6 6 minutes ago Running kube-scheduler 1 921a8e28d120d kube-scheduler-old-k8s-version-069806
cd37e487a6296 2c08bbbc02d3a 6 minutes ago Running kube-apiserver 1 e5b3f9d873ac4 kube-apiserver-old-k8s-version-069806
87f12b148cc5c 1df8a2b116bd1 6 minutes ago Running kube-controller-manager 1 9632a066efe2f kube-controller-manager-old-k8s-version-069806
687abee290206 05b738aa1bc63 6 minutes ago Running etcd 1 88d25167c7ef6 etcd-old-k8s-version-069806
710cfa588d31c 1611cd07b61d5 6 minutes ago Exited busybox 0 0b70057e83b3f busybox
03f122fd4abc9 db91994f4ee8f 8 minutes ago Exited coredns 0 7d9f906126a09 coredns-74ff55c5b-9djzb
62bdf5d2d244d 5e32961ddcea3 8 minutes ago Exited kindnet-cni 0 a941725ff33f2 kindnet-mv7j6
3a29c0026d940 25a5233254979 8 minutes ago Exited kube-proxy 0 c290d849c1f38 kube-proxy-gh8ms
bcdcb609ea0d9 e7605f88f17d6 9 minutes ago Exited kube-scheduler 0 4ecc926200169 kube-scheduler-old-k8s-version-069806
fdcb50d15d027 1df8a2b116bd1 9 minutes ago Exited kube-controller-manager 0 87cadf2efa3e4 kube-controller-manager-old-k8s-version-069806
e570d714588a4 05b738aa1bc63 9 minutes ago Exited etcd 0 a510894671347 etcd-old-k8s-version-069806
f910fbeca2eeb 2c08bbbc02d3a 9 minutes ago Exited kube-apiserver 0 340e3a2621932 kube-apiserver-old-k8s-version-069806
==> containerd <==
Jul 17 20:26:36 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:26:36.901707256Z" level=info msg="CreateContainer within sandbox \"96a9dfce9cc278e4f991664520cbcf41632392376e90ddec55990c1d6239ef5a\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"36655d74bd12cc0972863cc002143c167af24a2e5eb25095a3087e9747f3c042\""
Jul 17 20:26:36 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:26:36.902459747Z" level=info msg="StartContainer for \"36655d74bd12cc0972863cc002143c167af24a2e5eb25095a3087e9747f3c042\""
Jul 17 20:26:36 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:26:36.978064180Z" level=info msg="StartContainer for \"36655d74bd12cc0972863cc002143c167af24a2e5eb25095a3087e9747f3c042\" returns successfully"
Jul 17 20:26:37 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:26:37.022249035Z" level=info msg="shim disconnected" id=36655d74bd12cc0972863cc002143c167af24a2e5eb25095a3087e9747f3c042 namespace=k8s.io
Jul 17 20:26:37 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:26:37.022691404Z" level=warning msg="cleaning up after shim disconnected" id=36655d74bd12cc0972863cc002143c167af24a2e5eb25095a3087e9747f3c042 namespace=k8s.io
Jul 17 20:26:37 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:26:37.022920519Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 17 20:26:37 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:26:37.531495692Z" level=info msg="RemoveContainer for \"ab72fba8faa2c5f620b18e18a28109d77256c56d9f62ca303fd730fdce33c6c4\""
Jul 17 20:26:37 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:26:37.538765998Z" level=info msg="RemoveContainer for \"ab72fba8faa2c5f620b18e18a28109d77256c56d9f62ca303fd730fdce33c6c4\" returns successfully"
Jul 17 20:27:41 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:27:41.874864103Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:27:41 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:27:41.887384762Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Jul 17 20:27:41 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:27:41.889328633Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Jul 17 20:27:41 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:27:41.889411043Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jul 17 20:28:00 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:28:00.875621820Z" level=info msg="CreateContainer within sandbox \"96a9dfce9cc278e4f991664520cbcf41632392376e90ddec55990c1d6239ef5a\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Jul 17 20:28:00 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:28:00.890865318Z" level=info msg="CreateContainer within sandbox \"96a9dfce9cc278e4f991664520cbcf41632392376e90ddec55990c1d6239ef5a\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004\""
Jul 17 20:28:00 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:28:00.891461185Z" level=info msg="StartContainer for \"3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004\""
Jul 17 20:28:00 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:28:00.968680507Z" level=info msg="StartContainer for \"3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004\" returns successfully"
Jul 17 20:28:00 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:28:00.993245012Z" level=info msg="shim disconnected" id=3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004 namespace=k8s.io
Jul 17 20:28:00 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:28:00.993320940Z" level=warning msg="cleaning up after shim disconnected" id=3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004 namespace=k8s.io
Jul 17 20:28:00 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:28:00.993341502Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 17 20:28:01 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:28:01.748386202Z" level=info msg="RemoveContainer for \"36655d74bd12cc0972863cc002143c167af24a2e5eb25095a3087e9747f3c042\""
Jul 17 20:28:01 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:28:01.755886182Z" level=info msg="RemoveContainer for \"36655d74bd12cc0972863cc002143c167af24a2e5eb25095a3087e9747f3c042\" returns successfully"
Jul 17 20:30:22 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:30:22.875674825Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:30:22 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:30:22.880862805Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Jul 17 20:30:22 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:30:22.882276146Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Jul 17 20:30:22 old-k8s-version-069806 containerd[571]: time="2024-07-17T20:30:22.882359943Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
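Note on the containerd errors above: fake.domain is not a resolvable registry host, so the PullImage failures for fake.domain/registry.k8s.io/echoserver:1.4 recur at every image-pull back-off; they indicate the deliberately bogus image reference, not a registry outage. One way to see the resulting pod condition (the -l selector is an assumption; upstream metrics-server manifests label the pod k8s-app=metrics-server):
    kubectl -n kube-system describe pod -l k8s-app=metrics-server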
==> coredns [03f122fd4abc96b4162eaa9b0110afa04333b847fc74cafe0a68ca5d30990294] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:49567 - 9284 "HINFO IN 8087561133339694968.926235833535079580. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024484485s
==> coredns [6dc999a35d5b7cabe60ac89e4d0eeda4c16d4bdf70c86b94f130a692166a53d4] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:50910 - 39021 "HINFO IN 3340031814968280805.4266197550235982388. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022556982s
==> describe nodes <==
Name: old-k8s-version-069806
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-069806
kubernetes.io/os=linux
minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
minikube.k8s.io/name=old-k8s-version-069806
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_07_17T20_21_48_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 17 Jul 2024 20:21:44 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-069806
AcquireTime: <unset>
RenewTime: Wed, 17 Jul 2024 20:30:39 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 17 Jul 2024 20:25:47 +0000 Wed, 17 Jul 2024 20:21:38 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 17 Jul 2024 20:25:47 +0000 Wed, 17 Jul 2024 20:21:38 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 17 Jul 2024 20:25:47 +0000 Wed, 17 Jul 2024 20:21:38 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 17 Jul 2024 20:25:47 +0000 Wed, 17 Jul 2024 20:22:03 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-069806
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022360Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022360Ki
pods: 110
System Info:
Machine ID: f40817a4f4664320be724278d57ca186
System UUID: efd66201-957c-4716-b850-0ae965fa2ba0
Boot ID: d15a549a-b231-4e52-8730-2a5b60959e25
Kernel Version: 5.15.0-1064-aws
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.19
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace              Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------              ----                                              ------------  ----------  ---------------  -------------  ---
default                busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
kube-system            coredns-74ff55c5b-9djzb                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m38s
kube-system            etcd-old-k8s-version-069806                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m45s
kube-system            kindnet-mv7j6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m38s
kube-system            kube-apiserver-old-k8s-version-069806             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m45s
kube-system            kube-controller-manager-old-k8s-version-069806    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m45s
kube-system            kube-proxy-gh8ms                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
kube-system            kube-scheduler-old-k8s-version-069806             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m45s
kube-system            metrics-server-9975d5f86-v4sfl                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m31s
kube-system            storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
kubernetes-dashboard   dashboard-metrics-scraper-8d5bb5db8-pmx8k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
kubernetes-dashboard   kubernetes-dashboard-cd95d586-bllqh               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                950m (47%)   100m (5%)
memory             420Mi (5%)   220Mi (2%)
ephemeral-storage  100Mi (0%)   0 (0%)
hugepages-1Gi      0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
hugepages-32Mi     0 (0%)       0 (0%)
hugepages-64Ki     0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 9m5s (x4 over 9m5s) kubelet Node old-k8s-version-069806 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m5s (x4 over 9m5s) kubelet Node old-k8s-version-069806 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m5s (x5 over 9m5s) kubelet Node old-k8s-version-069806 status is now: NodeHasSufficientPID
Normal Starting 8m46s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m46s kubelet Node old-k8s-version-069806 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m46s kubelet Node old-k8s-version-069806 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m46s kubelet Node old-k8s-version-069806 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m45s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m38s kubelet Node old-k8s-version-069806 status is now: NodeReady
Normal Starting 8m36s kube-proxy Starting kube-proxy.
Normal Starting 6m4s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m4s (x8 over 6m4s) kubelet Node old-k8s-version-069806 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m4s (x8 over 6m4s) kubelet Node old-k8s-version-069806 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m4s (x7 over 6m4s) kubelet Node old-k8s-version-069806 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m4s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m51s kube-proxy Starting kube-proxy.
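The section above is kubectl describe output captured by minikube logs; to regenerate it against the live profile with the bundled client (a sketch; the profile and node names are taken verbatim from the log):
    out/minikube-linux-arm64 kubectl -p old-k8s-version-069806 -- describe node old-k8s-version-069806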
==> dmesg <==
[ +0.001047] FS-Cache: O-key=[8] '0c71ed0000000000'
[ +0.000704] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
[ +0.000948] FS-Cache: N-cookie d=0000000057175b43{9p.inode} n=0000000006a19ff2
[ +0.001050] FS-Cache: N-key=[8] '0c71ed0000000000'
[ +0.003590] FS-Cache: Duplicate cookie detected
[ +0.000766] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
[ +0.000975] FS-Cache: O-cookie d=0000000057175b43{9p.inode} n=0000000064350dd4
[ +0.001051] FS-Cache: O-key=[8] '0c71ed0000000000'
[ +0.000709] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
[ +0.000957] FS-Cache: N-cookie d=0000000057175b43{9p.inode} n=00000000e3dc1602
[ +0.001084] FS-Cache: N-key=[8] '0c71ed0000000000'
[ +2.766171] FS-Cache: Duplicate cookie detected
[ +0.000781] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
[ +0.001056] FS-Cache: O-cookie d=0000000057175b43{9p.inode} n=000000002c3db926
[ +0.001078] FS-Cache: O-key=[8] '0b71ed0000000000'
[ +0.000725] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
[ +0.000971] FS-Cache: N-cookie d=0000000057175b43{9p.inode} n=0000000028100d9c
[ +0.001082] FS-Cache: N-key=[8] '0b71ed0000000000'
[ +0.321844] FS-Cache: Duplicate cookie detected
[ +0.000709] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
[ +0.001033] FS-Cache: O-cookie d=0000000057175b43{9p.inode} n=0000000002b9fe32
[ +0.001056] FS-Cache: O-key=[8] '1171ed0000000000'
[ +0.000710] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
[ +0.000964] FS-Cache: N-cookie d=0000000057175b43{9p.inode} n=0000000006a19ff2
[ +0.001068] FS-Cache: N-key=[8] '1171ed0000000000'
==> etcd [687abee290206c4efc81f20fc7a1ed7bbd6be9374ad6bbb86d674109b406be0c] <==
2024-07-17 20:26:41.877710 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:26:51.876672 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:27:01.876657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:27:11.875467 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:27:21.874639 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:27:31.876505 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:27:41.877731 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:27:51.874629 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:28:01.876146 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:28:11.878016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:28:21.874560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:28:31.879430 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:28:41.876424 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:28:51.877999 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:29:01.875020 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:29:11.875311 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:29:21.874586 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:29:31.875167 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:29:41.878031 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:29:51.874641 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:30:01.874733 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:30:11.878288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:30:21.888776 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:30:31.874438 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:30:41.874401 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [e570d714588a47af78dc81d95545a2a3f0ac0ae3acb938112dc2a7230b0a49b1] <==
raft2024/07/17 20:21:37 INFO: ea7e25599daad906 became leader at term 2
raft2024/07/17 20:21:37 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2024-07-17 20:21:37.804704 I | etcdserver: published {Name:old-k8s-version-069806 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2024-07-17 20:21:37.804865 I | embed: ready to serve client requests
2024-07-17 20:21:37.811873 I | etcdserver: setting up the initial cluster version to 3.4
2024-07-17 20:21:37.821401 I | embed: serving client requests on 127.0.0.1:2379
2024-07-17 20:21:37.824138 I | embed: ready to serve client requests
2024-07-17 20:21:37.825653 I | embed: serving client requests on 192.168.76.2:2379
2024-07-17 20:21:37.852061 N | etcdserver/membership: set the initial cluster version to 3.4
2024-07-17 20:21:37.852387 I | etcdserver/api: enabled capabilities for version 3.4
2024-07-17 20:21:47.326248 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:21:57.090107 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:22:05.494082 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:22:15.490094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:22:25.489999 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:22:35.490010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:22:45.490043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:22:55.490090 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:23:05.490129 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:23:15.489951 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:23:25.490154 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:23:35.489914 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:23:45.490040 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:23:55.489988 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:24:05.490040 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
20:30:42 up 4:13, 0 users, load average: 2.30, 2.10, 2.60
Linux old-k8s-version-069806 5.15.0-1064-aws #70~20.04.1-Ubuntu SMP Thu Jun 27 14:52:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
==> kindnet [1f4da85e269235a6b1b8eba0cc55aa66708531b2d77a77c6fd4c1b3d649ff874] <==
E0717 20:29:25.866179 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0717 20:29:30.728977 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:29:30.729025 1 main.go:303] handling current node
W0717 20:29:33.832009 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0717 20:29:33.832085 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
I0717 20:29:40.729506 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:29:40.729711 1 main.go:303] handling current node
I0717 20:29:50.729352 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:29:50.729394 1 main.go:303] handling current node
I0717 20:30:00.729073 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:30:00.729130 1 main.go:303] handling current node
W0717 20:30:09.957298 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0717 20:30:09.957530 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
I0717 20:30:10.731496 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:30:10.731538 1 main.go:303] handling current node
W0717 20:30:11.003551 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0717 20:30:11.003608 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
I0717 20:30:20.729001 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:30:20.729175 1 main.go:303] handling current node
W0717 20:30:21.632928 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0717 20:30:21.632969 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0717 20:30:30.728862 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:30:30.728905 1 main.go:303] handling current node
I0717 20:30:40.729361 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:30:40.729393 1 main.go:303] handling current node
==> kindnet [62bdf5d2d244d2f8c5398ebc927cf85a2d9e82aa5dae832ea5ce7dd7bf6e863b] <==
I0717 20:23:07.534044 1 main.go:303] handling current node
W0717 20:23:09.545572 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0717 20:23:09.545610 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0717 20:23:17.533866 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:23:17.533901 1 main.go:303] handling current node
W0717 20:23:19.477434 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0717 20:23:19.477469 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
W0717 20:23:23.680253 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0717 20:23:23.680290 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
I0717 20:23:27.533116 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:23:27.533160 1 main.go:303] handling current node
I0717 20:23:37.533188 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:23:37.533224 1 main.go:303] handling current node
W0717 20:23:44.435313 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0717 20:23:44.435350 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0717 20:23:47.533359 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:23:47.533399 1 main.go:303] handling current node
W0717 20:23:56.746874 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0717 20:23:56.747929 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
I0717 20:23:57.533189 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:23:57.533411 1 main.go:303] handling current node
W0717 20:24:04.570249 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0717 20:24:04.570289 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
I0717 20:24:07.533884 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:24:07.534116 1 main.go:303] handling current node
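Both kindnet containers (before and after the restart) log the same RBAC denials: the system:serviceaccount:kube-system:kindnet account is refused list/watch on pods, namespaces, and networkpolicies at cluster scope. A quick check from the host using standard kubectl impersonation (the service-account name is copied verbatim from the log; whether these denials matter for the test outcome is not established here):
    kubectl auth can-i list pods --as=system:serviceaccount:kube-system:kindnet
    kubectl auth can-i list namespaces --as=system:serviceaccount:kube-system:kindnet
    kubectl auth can-i list networkpolicies.networking.k8s.io --as=system:serviceaccount:kube-system:kindnet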
==> kube-apiserver [cd37e487a62960309ed82922e0a8a525350123f65c49e5f04cfa975054d7d21c] <==
I0717 20:27:16.168128 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:27:16.168137 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0717 20:27:47.731993 1 client.go:360] parsed scheme: "passthrough"
I0717 20:27:47.732077 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:27:47.732087 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0717 20:27:50.138363 1 handler_proxy.go:102] no RequestInfo found in the context
E0717 20:27:50.138443 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0717 20:27:50.138452 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0717 20:28:30.577033 1 client.go:360] parsed scheme: "passthrough"
I0717 20:28:30.577147 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:28:30.577181 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0717 20:29:05.917192 1 client.go:360] parsed scheme: "passthrough"
I0717 20:29:05.917246 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:29:05.917255 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0717 20:29:41.464747 1 client.go:360] parsed scheme: "passthrough"
I0717 20:29:41.464847 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:29:41.464858 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0717 20:29:48.222695 1 handler_proxy.go:102] no RequestInfo found in the context
E0717 20:29:48.222767 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0717 20:29:48.222777 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0717 20:30:23.483727 1 client.go:360] parsed scheme: "passthrough"
I0717 20:30:23.483774 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:30:23.483782 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [f910fbeca2eebc637c9085897f65b8f1c51713acff4ad29cfb0075f1f3ec10bc] <==
I0717 20:21:45.224098 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0717 20:21:45.224123 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0717 20:21:45.892006 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0717 20:21:45.936325 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0717 20:21:46.045126 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0717 20:21:46.046455 1 controller.go:606] quota admission added evaluator for: endpoints
I0717 20:21:46.050895 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0717 20:21:46.952537 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0717 20:21:47.474512 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0717 20:21:47.571132 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0717 20:21:55.904410 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0717 20:22:03.215297 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0717 20:22:03.275030 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0717 20:22:12.659749 1 client.go:360] parsed scheme: "passthrough"
I0717 20:22:12.659791 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:22:12.659801 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0717 20:22:48.872192 1 client.go:360] parsed scheme: "passthrough"
I0717 20:22:48.872240 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:22:48.872249 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0717 20:23:19.775185 1 client.go:360] parsed scheme: "passthrough"
I0717 20:23:19.775270 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:23:19.775285 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0717 20:23:56.314804 1 client.go:360] parsed scheme: "passthrough"
I0717 20:23:56.314865 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:23:56.314874 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [87f12b148cc5c4d4ab511a09ea624cefb8ca24845ab582cb5171cac706cfab08] <==
E0717 20:26:37.722993 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:26:42.093563 1 request.go:655] Throttling request took 1.049910271s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
W0717 20:26:42.943787 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:27:08.225976 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:27:14.594413 1 request.go:655] Throttling request took 1.04839527s, request: GET:https://192.168.76.2:8443/apis/apiregistration.k8s.io/v1beta1?timeout=32s
W0717 20:27:15.445878 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:27:38.727661 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:27:47.096475 1 request.go:655] Throttling request took 1.048199895s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0717 20:27:47.947896 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:28:09.229462 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:28:19.600004 1 request.go:655] Throttling request took 1.046873495s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0717 20:28:20.449732 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:28:39.731470 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:28:52.100174 1 request.go:655] Throttling request took 1.048282778s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0717 20:28:52.951617 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:29:10.233283 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:29:24.602175 1 request.go:655] Throttling request took 1.048601507s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
W0717 20:29:25.453567 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:29:40.734955 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:29:57.104162 1 request.go:655] Throttling request took 1.048304942s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0717 20:29:57.955893 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:30:11.237036 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:30:29.606323 1 request.go:655] Throttling request took 1.048295414s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W0717 20:30:30.457834 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:30:41.739066 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
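The recurring metrics.k8s.io/v1beta1 failures in the controller-manager block above are plausibly the downstream effect of the unpullable metrics-server image seen in the containerd section: the aggregated APIService never turns Available, so discovery of that group keeps returning 503. To confirm directly (the APIService name appears verbatim in the apiserver log):
    kubectl get apiservice v1beta1.metrics.k8s.io -o wide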
==> kube-controller-manager [fdcb50d15d02756e4b375373cc89aee7407bf6f49813ed607f9a66e11be2c02f] <==
I0717 20:22:03.246826 1 shared_informer.go:247] Caches are synced for attach detach
I0717 20:22:03.252963 1 shared_informer.go:247] Caches are synced for taint
I0717 20:22:03.253185 1 shared_informer.go:247] Caches are synced for deployment
I0717 20:22:03.253803 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0717 20:22:03.254169 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W0717 20:22:03.254260 1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-069806. Assuming now as a timestamp.
I0717 20:22:03.254320 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal.
I0717 20:22:03.254666 1 event.go:291] "Event occurred" object="old-k8s-version-069806" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-069806 event: Registered Node old-k8s-version-069806 in Controller"
I0717 20:22:03.254752 1 shared_informer.go:247] Caches are synced for PVC protection
I0717 20:22:03.256987 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mv7j6"
I0717 20:22:03.261701 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0717 20:22:03.262296 1 shared_informer.go:247] Caches are synced for resource quota
I0717 20:22:03.300928 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I0717 20:22:03.321316 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-9djzb"
I0717 20:22:03.344715 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-2whzj"
E0717 20:22:03.383968 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"267b013e-888f-4d7a-8693-1f0231327565", ResourceVersion:"266", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63856844508, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240715-585640e9\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001a6ef40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001a6ef60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001a6ef80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a6efa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a6efc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a6efe0), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240715-585640e9", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001a6f000)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001a6f040)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40010f3e00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001246ec8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000acfb20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40000fdc90)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001246f10)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I0717 20:22:03.414357 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0717 20:22:03.708753 1 shared_informer.go:247] Caches are synced for garbage collector
I0717 20:22:03.708787 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0717 20:22:03.728288 1 shared_informer.go:247] Caches are synced for garbage collector
I0717 20:22:04.580516 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0717 20:22:04.596842 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-2whzj"
I0717 20:24:09.266583 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
I0717 20:24:09.333995 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
E0717 20:24:09.358388 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
==> kube-proxy [3a29c0026d94090fe1242ea0503bd2f78ae5aa0004fecb1cea584a49f5ddc1f4] <==
I0717 20:22:05.011638 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0717 20:22:05.011805 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0717 20:22:05.041577 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0717 20:22:05.041702 1 server_others.go:185] Using iptables Proxier.
I0717 20:22:05.041959 1 server.go:650] Version: v1.20.0
I0717 20:22:05.042498 1 config.go:315] Starting service config controller
I0717 20:22:05.042512 1 shared_informer.go:240] Waiting for caches to sync for service config
I0717 20:22:05.044120 1 config.go:224] Starting endpoint slice config controller
I0717 20:22:05.044135 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0717 20:22:05.142616 1 shared_informer.go:247] Caches are synced for service config
I0717 20:22:05.144270 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [4157f24e3104cf922fe5deb02fbefd7da3728b0bdc290a7c09353bca677fd608] <==
I0717 20:24:50.406211 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0717 20:24:50.406293 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0717 20:24:50.434128 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0717 20:24:50.434228 1 server_others.go:185] Using iptables Proxier.
I0717 20:24:50.434515 1 server.go:650] Version: v1.20.0
I0717 20:24:50.435119 1 config.go:315] Starting service config controller
I0717 20:24:50.436903 1 shared_informer.go:240] Waiting for caches to sync for service config
I0717 20:24:50.435291 1 config.go:224] Starting endpoint slice config controller
I0717 20:24:50.437200 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0717 20:24:50.538009 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0717 20:24:50.538320 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [bcdcb609ea0d91709769d3d302be011074525901ec1c642b6b5f14c7eb1217ef] <==
W0717 20:21:44.378004 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0717 20:21:44.378089 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0717 20:21:44.437175 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0717 20:21:44.441003 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0717 20:21:44.441032 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0717 20:21:44.441049 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0717 20:21:44.450727 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0717 20:21:44.452322 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0717 20:21:44.452599 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0717 20:21:44.454417 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0717 20:21:44.462991 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0717 20:21:44.463034 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0717 20:21:44.463147 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0717 20:21:44.463245 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0717 20:21:44.463289 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0717 20:21:44.463331 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0717 20:21:44.463364 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0717 20:21:44.463399 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0717 20:21:45.331562 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0717 20:21:45.393811 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0717 20:21:45.418756 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0717 20:21:45.497572 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0717 20:21:45.541203 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0717 20:21:45.606930 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0717 20:21:47.341197 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
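The Forbidden errors are transient startup noise: the scheduler's informers begin listing before the apiserver finishes reconciling the RBAC bootstrap policy for system:kube-scheduler, and they stop once caches sync (last line above). One way to confirm the policy has landed is a SubjectAccessReview against an admin credential; a sketch, with the kubeconfig path assumed for illustration:

    package main

    import (
        "context"
        "fmt"

        authzv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical admin kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Ask the apiserver whether system:kube-scheduler may list pods cluster-wide,
        // i.e. whether the RBAC policy the errors above complain about is in place.
        sar := &authzv1.SubjectAccessReview{
            Spec: authzv1.SubjectAccessReviewSpec{
                User: "system:kube-scheduler",
                ResourceAttributes: &authzv1.ResourceAttributes{
                    Verb:     "list",
                    Resource: "pods",
                },
            },
        }
        resp, err := client.AuthorizationV1().SubjectAccessReviews().Create(
            context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("scheduler may list pods:", resp.Status.Allowed)
    }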
==> kube-scheduler [d6de24a696fe6c467e60ebcdf508df032838133f4f19467258e96b7f9bfaaf75] <==
I0717 20:24:40.835164 1 serving.go:331] Generated self-signed cert in-memory
W0717 20:24:47.055495 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0717 20:24:47.068599 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0717 20:24:47.071530 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0717 20:24:47.071586 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0717 20:24:47.495196 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0717 20:24:47.496212 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0717 20:24:47.500204 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0717 20:24:47.500293 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0717 20:24:47.636296 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jul 17 20:29:11 old-k8s-version-069806 kubelet[662]: E0717 20:29:11.874254 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
Jul 17 20:29:12 old-k8s-version-069806 kubelet[662]: E0717 20:29:12.874187 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:29:26 old-k8s-version-069806 kubelet[662]: I0717 20:29:26.873490 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004
Jul 17 20:29:26 old-k8s-version-069806 kubelet[662]: E0717 20:29:26.873840 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
Jul 17 20:29:27 old-k8s-version-069806 kubelet[662]: E0717 20:29:27.875007 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:29:38 old-k8s-version-069806 kubelet[662]: I0717 20:29:38.873564 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004
Jul 17 20:29:38 old-k8s-version-069806 kubelet[662]: E0717 20:29:38.874336 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
Jul 17 20:29:39 old-k8s-version-069806 kubelet[662]: E0717 20:29:39.874118 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:29:49 old-k8s-version-069806 kubelet[662]: I0717 20:29:49.873403 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004
Jul 17 20:29:49 old-k8s-version-069806 kubelet[662]: E0717 20:29:49.873788 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
Jul 17 20:29:53 old-k8s-version-069806 kubelet[662]: E0717 20:29:53.876300 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:30:02 old-k8s-version-069806 kubelet[662]: I0717 20:30:02.873402 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004
Jul 17 20:30:02 old-k8s-version-069806 kubelet[662]: E0717 20:30:02.874275 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
Jul 17 20:30:07 old-k8s-version-069806 kubelet[662]: E0717 20:30:07.874813 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:30:14 old-k8s-version-069806 kubelet[662]: I0717 20:30:14.873455 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004
Jul 17 20:30:14 old-k8s-version-069806 kubelet[662]: E0717 20:30:14.873785 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
Jul 17 20:30:22 old-k8s-version-069806 kubelet[662]: E0717 20:30:22.882718 662 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Jul 17 20:30:22 old-k8s-version-069806 kubelet[662]: E0717 20:30:22.883145 662 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Jul 17 20:30:22 old-k8s-version-069806 kubelet[662]: E0717 20:30:22.883375 662 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-fksnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Jul 17 20:30:22 old-k8s-version-069806 kubelet[662]: E0717 20:30:22.883566 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Jul 17 20:30:25 old-k8s-version-069806 kubelet[662]: I0717 20:30:25.882213 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004
Jul 17 20:30:25 old-k8s-version-069806 kubelet[662]: E0717 20:30:25.883025 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
Jul 17 20:30:33 old-k8s-version-069806 kubelet[662]: E0717 20:30:33.875704 662 pod_workers.go:191] Error syncing pod f6c18fc0-33f0-47c6-a0e7-879ddc760c3c ("metrics-server-9975d5f86-v4sfl_kube-system(f6c18fc0-33f0-47c6-a0e7-879ddc760c3c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:30:39 old-k8s-version-069806 kubelet[662]: I0717 20:30:39.873556 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3ca5b52c96d52265da32472b962c89e0ca2efe33f6b1148f0a6a9cb84fcd1004
Jul 17 20:30:39 old-k8s-version-069806 kubelet[662]: E0717 20:30:39.874438 662 pod_workers.go:191] Error syncing pod a167cb8b-0936-4b2c-ac47-7a22fb30f359 ("dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-pmx8k_kubernetes-dashboard(a167cb8b-0936-4b2c-ac47-7a22fb30f359)"
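The metrics-server pull failures are expected: this test deliberately points the image at the nonexistent registry host fake.domain, so every pull dies at DNS resolution, exactly as the ErrImagePull chain above shows. The root cause reduces to one failing lookup, sketched here:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // The registry host the test wires into the metrics-server deployment.
        // It does not resolve, so every image pull fails before any HTTP request,
        // matching the kubelet's "no such host" error above.
        addrs, err := net.LookupHost("fake.domain")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println(addrs)
    }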
==> kubernetes-dashboard [44895de2582221beb2740dcf68a96bd5aab3c26b61e7fd33bace4c9538e555f1] <==
2024/07/17 20:25:12 Starting overwatch
2024/07/17 20:25:12 Using namespace: kubernetes-dashboard
2024/07/17 20:25:12 Using in-cluster config to connect to apiserver
2024/07/17 20:25:12 Using secret token for csrf signing
2024/07/17 20:25:12 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/07/17 20:25:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/07/17 20:25:12 Successful initial request to the apiserver, version: v1.20.0
2024/07/17 20:25:12 Generating JWE encryption key
2024/07/17 20:25:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/07/17 20:25:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/07/17 20:25:14 Initializing JWE encryption key from synchronized object
2024/07/17 20:25:14 Creating in-cluster Sidecar client
2024/07/17 20:25:14 Serving insecurely on HTTP port: 9090
2024/07/17 20:25:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:25:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:26:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:26:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:27:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:27:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:28:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:28:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:29:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:29:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:30:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
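The dashboard's metric client fails its health check and retries on a fixed 30-second cadence because the dashboard-metrics-scraper pod behind the service is crash-looping (see the kubelet section above). A rough sketch of an equivalent readiness poll via client-go, assuming a placeholder kubeconfig path:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll the scraper's Endpoints on the same 30s cadence the dashboard uses;
        // no ready addresses means the proxied health check above keeps failing.
        for {
            ep, err := client.CoreV1().Endpoints("kubernetes-dashboard").Get(
                context.TODO(), "dashboard-metrics-scraper", metav1.GetOptions{})
            if err == nil && len(ep.Subsets) > 0 && len(ep.Subsets[0].Addresses) > 0 {
                fmt.Println("dashboard-metrics-scraper has ready endpoints")
                return
            }
            fmt.Println("scraper not ready, retrying in 30 seconds")
            time.Sleep(30 * time.Second)
        }
    }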
==> storage-provisioner [57b729d79f17dbcfa3bb0c46429504d8399588e217c6bd8ca2e38947ff5d4cf0] <==
I0717 20:25:32.088638 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0717 20:25:32.124659 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0717 20:25:32.124711 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0717 20:25:49.596791 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0717 20:25:49.596881 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fbbce96e-3799-4a46-809b-860f843b678e", APIVersion:"v1", ResourceVersion:"856", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-069806_465facb7-aa0a-4114-a984-49ab52f96775 became leader
I0717 20:25:49.597493 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-069806_465facb7-aa0a-4114-a984-49ab52f96775!
I0717 20:25:49.697673 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-069806_465facb7-aa0a-4114-a984-49ab52f96775!
==> storage-provisioner [89d02f9d8a4af9b6c7a8ebd5d1b7850c32625b8302d57fba3cc86f4f4b3513f5] <==
I0717 20:24:49.505148 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0717 20:25:19.507458 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
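This earlier storage-provisioner instance died because the in-cluster apiserver VIP (10.96.0.1:443) was still unreachable while the control plane restarted; its replacement (previous block) then acquired the lease. The failing probe reduces to a TCP dial with a timeout; note this VIP is only routable from inside the cluster network:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // In-cluster kubernetes.default service VIP from the fatal error above.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
        if err != nil {
            fmt.Println("apiserver VIP unreachable:", err) // i/o timeout during restart
            return
        }
        conn.Close()
        fmt.Println("apiserver VIP reachable")
    }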
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-069806 -n old-k8s-version-069806
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-069806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-v4sfl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-069806 describe pod metrics-server-9975d5f86-v4sfl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-069806 describe pod metrics-server-9975d5f86-v4sfl: exit status 1 (93.895995ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-v4sfl" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-069806 describe pod metrics-server-9975d5f86-v4sfl: exit status 1
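The NotFound on describe is most likely a race: the metrics-server pod captured by the non-running list at helpers_test.go:261 was deleted and replaced before the describe ran. That list is equivalent to the following client-go query (kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Same query as the post-mortem kubectl above: every pod, in any
        // namespace, whose phase is not Running.
        pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
            FieldSelector: "status.phase!=Running",
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Namespace + "/" + p.Name)
        }
    }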
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (381.31s)