=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-469910 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
E0329 17:06:25.537809 7781 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/functional-842970/client.crt: no such file or directory" logger="UnhandledError"
E0329 17:06:39.976765 7781 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/addons-128892/client.crt: no such file or directory" logger="UnhandledError"
E0329 17:08:22.465496 7781 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/functional-842970/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-469910 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m9.306570403s)
-- stdout --
* [old-k8s-version-469910] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20470
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20470-2310/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20470-2310/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-469910" primary control-plane node in "old-k8s-version-469910" cluster
* Pulling base image v0.0.46-1741860993-20523 ...
* Restarting existing docker container for "old-k8s-version-469910" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-469910 addons enable metrics-server
* Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
-- /stdout --
** stderr **
I0329 17:04:56.607016 215882 out.go:345] Setting OutFile to fd 1 ...
I0329 17:04:56.607269 215882 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:04:56.607311 215882 out.go:358] Setting ErrFile to fd 2...
I0329 17:04:56.607332 215882 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:04:56.607659 215882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20470-2310/.minikube/bin
I0329 17:04:56.608130 215882 out.go:352] Setting JSON to false
I0329 17:04:56.609105 215882 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6447,"bootTime":1743261450,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1080-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I0329 17:04:56.609202 215882 start.go:139] virtualization:
I0329 17:04:56.614321 215882 out.go:177] * [old-k8s-version-469910] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0329 17:04:56.618326 215882 notify.go:220] Checking for updates...
I0329 17:04:56.620091 215882 out.go:177] - MINIKUBE_LOCATION=20470
I0329 17:04:56.623266 215882 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0329 17:04:56.626085 215882 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20470-2310/kubeconfig
I0329 17:04:56.628925 215882 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20470-2310/.minikube
I0329 17:04:56.631752 215882 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0329 17:04:56.636428 215882 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0329 17:04:56.640417 215882 config.go:182] Loaded profile config "old-k8s-version-469910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0329 17:04:56.645098 215882 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
I0329 17:04:56.648211 215882 driver.go:394] Setting default libvirt URI to qemu:///system
I0329 17:04:56.691821 215882 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0329 17:04:56.691939 215882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0329 17:04:56.797699 215882 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:51 SystemTime:2025-03-29 17:04:56.787463197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1080-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:753481ec61c7c8955a23d6ff7bc8e4daed455734 Expected:753481ec61c7c8955a23d6ff7bc8e4daed455734} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0329 17:04:56.797825 215882 docker.go:318] overlay module found
I0329 17:04:56.801429 215882 out.go:177] * Using the docker driver based on existing profile
I0329 17:04:56.804178 215882 start.go:297] selected driver: docker
I0329 17:04:56.804202 215882 start.go:901] validating driver "docker" against &{Name:old-k8s-version-469910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-469910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0329 17:04:56.804306 215882 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0329 17:04:56.805010 215882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0329 17:04:56.892010 215882 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-03-29 17:04:56.882383179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1080-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:753481ec61c7c8955a23d6ff7bc8e4daed455734 Expected:753481ec61c7c8955a23d6ff7bc8e4daed455734} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0329 17:04:56.892349 215882 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0329 17:04:56.892387 215882 cni.go:84] Creating CNI manager for ""
I0329 17:04:56.892444 215882 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0329 17:04:56.892484 215882 start.go:340] cluster config:
{Name:old-k8s-version-469910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-469910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0329 17:04:56.895801 215882 out.go:177] * Starting "old-k8s-version-469910" primary control-plane node in "old-k8s-version-469910" cluster
I0329 17:04:56.898580 215882 cache.go:121] Beginning downloading kic base image for docker with containerd
I0329 17:04:56.901613 215882 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
I0329 17:04:56.904585 215882 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0329 17:04:56.904635 215882 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20470-2310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0329 17:04:56.904645 215882 cache.go:56] Caching tarball of preloaded images
I0329 17:04:56.904733 215882 preload.go:172] Found /home/jenkins/minikube-integration/20470-2310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0329 17:04:56.904741 215882 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0329 17:04:56.904855 215882 profile.go:143] Saving config to /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/old-k8s-version-469910/config.json ...
I0329 17:04:56.905077 215882 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
I0329 17:04:56.938147 215882 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
I0329 17:04:56.938174 215882 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
I0329 17:04:56.938194 215882 cache.go:230] Successfully downloaded all kic artifacts
I0329 17:04:56.938217 215882 start.go:360] acquireMachinesLock for old-k8s-version-469910: {Name:mk66565c4480e09583f58bdb0b5b90464d641e73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0329 17:04:56.938291 215882 start.go:364] duration metric: took 47.868µs to acquireMachinesLock for "old-k8s-version-469910"
I0329 17:04:56.938317 215882 start.go:96] Skipping create...Using existing machine configuration
I0329 17:04:56.938332 215882 fix.go:54] fixHost starting:
I0329 17:04:56.938625 215882 cli_runner.go:164] Run: docker container inspect old-k8s-version-469910 --format={{.State.Status}}
I0329 17:04:56.959533 215882 fix.go:112] recreateIfNeeded on old-k8s-version-469910: state=Stopped err=<nil>
W0329 17:04:56.959560 215882 fix.go:138] unexpected machine state, will restart: <nil>
I0329 17:04:56.963058 215882 out.go:177] * Restarting existing docker container for "old-k8s-version-469910" ...
I0329 17:04:56.968699 215882 cli_runner.go:164] Run: docker start old-k8s-version-469910
I0329 17:04:57.299793 215882 cli_runner.go:164] Run: docker container inspect old-k8s-version-469910 --format={{.State.Status}}
I0329 17:04:57.334671 215882 kic.go:430] container "old-k8s-version-469910" state is running.
I0329 17:04:57.336197 215882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-469910
I0329 17:04:57.368943 215882 profile.go:143] Saving config to /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/old-k8s-version-469910/config.json ...
I0329 17:04:57.369159 215882 machine.go:93] provisionDockerMachine start ...
I0329 17:04:57.369220 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:04:57.407743 215882 main.go:141] libmachine: Using SSH client type: native
I0329 17:04:57.408065 215882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I0329 17:04:57.408074 215882 main.go:141] libmachine: About to run SSH command:
hostname
I0329 17:04:57.408732 215882 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45580->127.0.0.1:33068: read: connection reset by peer
I0329 17:05:00.543905 215882 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-469910
I0329 17:05:00.543956 215882 ubuntu.go:169] provisioning hostname "old-k8s-version-469910"
I0329 17:05:00.544049 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:05:00.575991 215882 main.go:141] libmachine: Using SSH client type: native
I0329 17:05:00.576285 215882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I0329 17:05:00.576296 215882 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-469910 && echo "old-k8s-version-469910" | sudo tee /etc/hostname
I0329 17:05:00.721776 215882 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-469910
I0329 17:05:00.721858 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:05:00.745711 215882 main.go:141] libmachine: Using SSH client type: native
I0329 17:05:00.746012 215882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I0329 17:05:00.746030 215882 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-469910' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-469910/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-469910' | sudo tee -a /etc/hosts;
fi
fi
I0329 17:05:00.871597 215882 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0329 17:05:00.871631 215882 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20470-2310/.minikube CaCertPath:/home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20470-2310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20470-2310/.minikube}
I0329 17:05:00.871660 215882 ubuntu.go:177] setting up certificates
I0329 17:05:00.871673 215882 provision.go:84] configureAuth start
I0329 17:05:00.871757 215882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-469910
I0329 17:05:00.899739 215882 provision.go:143] copyHostCerts
I0329 17:05:00.899816 215882 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2310/.minikube/cert.pem, removing ...
I0329 17:05:00.899840 215882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2310/.minikube/cert.pem
I0329 17:05:00.899904 215882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20470-2310/.minikube/cert.pem (1123 bytes)
I0329 17:05:00.900017 215882 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2310/.minikube/key.pem, removing ...
I0329 17:05:00.900030 215882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2310/.minikube/key.pem
I0329 17:05:00.900059 215882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20470-2310/.minikube/key.pem (1679 bytes)
I0329 17:05:00.900132 215882 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2310/.minikube/ca.pem, removing ...
I0329 17:05:00.900143 215882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2310/.minikube/ca.pem
I0329 17:05:00.900166 215882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20470-2310/.minikube/ca.pem (1082 bytes)
I0329 17:05:00.900234 215882 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20470-2310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-469910 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-469910]
I0329 17:05:01.526902 215882 provision.go:177] copyRemoteCerts
I0329 17:05:01.526974 215882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0329 17:05:01.527024 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:05:01.546649 215882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/old-k8s-version-469910/id_rsa Username:docker}
I0329 17:05:01.643340 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0329 17:05:01.679506 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0329 17:05:01.720918 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0329 17:05:01.751753 215882 provision.go:87] duration metric: took 880.047135ms to configureAuth
I0329 17:05:01.751779 215882 ubuntu.go:193] setting minikube options for container-runtime
I0329 17:05:01.751979 215882 config.go:182] Loaded profile config "old-k8s-version-469910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0329 17:05:01.751989 215882 machine.go:96] duration metric: took 4.382823005s to provisionDockerMachine
I0329 17:05:01.751997 215882 start.go:293] postStartSetup for "old-k8s-version-469910" (driver="docker")
I0329 17:05:01.752007 215882 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0329 17:05:01.752055 215882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0329 17:05:01.752093 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:05:01.778964 215882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/old-k8s-version-469910/id_rsa Username:docker}
I0329 17:05:01.871253 215882 ssh_runner.go:195] Run: cat /etc/os-release
I0329 17:05:01.874961 215882 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0329 17:05:01.875002 215882 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0329 17:05:01.875035 215882 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0329 17:05:01.875049 215882 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0329 17:05:01.875062 215882 filesync.go:126] Scanning /home/jenkins/minikube-integration/20470-2310/.minikube/addons for local assets ...
I0329 17:05:01.875158 215882 filesync.go:126] Scanning /home/jenkins/minikube-integration/20470-2310/.minikube/files for local assets ...
I0329 17:05:01.875282 215882 filesync.go:149] local asset: /home/jenkins/minikube-integration/20470-2310/.minikube/files/etc/ssl/certs/77812.pem -> 77812.pem in /etc/ssl/certs
I0329 17:05:01.875482 215882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0329 17:05:01.884943 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/files/etc/ssl/certs/77812.pem --> /etc/ssl/certs/77812.pem (1708 bytes)
I0329 17:05:01.913509 215882 start.go:296] duration metric: took 161.498703ms for postStartSetup
I0329 17:05:01.913591 215882 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0329 17:05:01.913646 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:05:01.937134 215882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/old-k8s-version-469910/id_rsa Username:docker}
I0329 17:05:02.033996 215882 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0329 17:05:02.038962 215882 fix.go:56] duration metric: took 5.100629293s for fixHost
I0329 17:05:02.039001 215882 start.go:83] releasing machines lock for "old-k8s-version-469910", held for 5.10068412s
I0329 17:05:02.039083 215882 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-469910
I0329 17:05:02.065525 215882 ssh_runner.go:195] Run: cat /version.json
I0329 17:05:02.065580 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:05:02.065834 215882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0329 17:05:02.065893 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:05:02.101066 215882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/old-k8s-version-469910/id_rsa Username:docker}
I0329 17:05:02.110042 215882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/old-k8s-version-469910/id_rsa Username:docker}
I0329 17:05:02.352750 215882 ssh_runner.go:195] Run: systemctl --version
I0329 17:05:02.357314 215882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0329 17:05:02.361971 215882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0329 17:05:02.382886 215882 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0329 17:05:02.383011 215882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0329 17:05:02.393329 215882 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0329 17:05:02.393410 215882 start.go:498] detecting cgroup driver to use...
I0329 17:05:02.393458 215882 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0329 17:05:02.393538 215882 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0329 17:05:02.414532 215882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0329 17:05:02.431837 215882 docker.go:217] disabling cri-docker service (if available) ...
I0329 17:05:02.431953 215882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0329 17:05:02.449336 215882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0329 17:05:02.463808 215882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0329 17:05:02.570758 215882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0329 17:05:02.684899 215882 docker.go:233] disabling docker service ...
I0329 17:05:02.685032 215882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0329 17:05:02.698700 215882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0329 17:05:02.711252 215882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0329 17:05:02.823024 215882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0329 17:05:02.933164 215882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0329 17:05:02.948162 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0329 17:05:02.970392 215882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0329 17:05:02.982396 215882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0329 17:05:02.993132 215882 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0329 17:05:02.993208 215882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0329 17:05:03.004607 215882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0329 17:05:03.014293 215882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0329 17:05:03.028313 215882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0329 17:05:03.041356 215882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0329 17:05:03.052558 215882 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0329 17:05:03.070651 215882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0329 17:05:03.081428 215882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0329 17:05:03.093049 215882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0329 17:05:03.204895 215882 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0329 17:05:03.421896 215882 start.go:545] Will wait 60s for socket path /run/containerd/containerd.sock
I0329 17:05:03.421961 215882 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0329 17:05:03.426345 215882 start.go:566] Will wait 60s for crictl version
I0329 17:05:03.426422 215882 ssh_runner.go:195] Run: which crictl
I0329 17:05:03.429782 215882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0329 17:05:03.495555 215882 start.go:582] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.25
RuntimeApiVersion: v1
I0329 17:05:03.495636 215882 ssh_runner.go:195] Run: containerd --version
I0329 17:05:03.520340 215882 ssh_runner.go:195] Run: containerd --version
I0329 17:05:03.555713 215882 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.25 ...
I0329 17:05:03.558702 215882 cli_runner.go:164] Run: docker network inspect old-k8s-version-469910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0329 17:05:03.584169 215882 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0329 17:05:03.588561 215882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0329 17:05:03.604191 215882 kubeadm.go:883] updating cluster {Name:old-k8s-version-469910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-469910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0329 17:05:03.604310 215882 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0329 17:05:03.604369 215882 ssh_runner.go:195] Run: sudo crictl images --output json
I0329 17:05:03.672077 215882 containerd.go:627] all images are preloaded for containerd runtime.
I0329 17:05:03.672102 215882 containerd.go:534] Images already preloaded, skipping extraction
I0329 17:05:03.672156 215882 ssh_runner.go:195] Run: sudo crictl images --output json
I0329 17:05:03.737749 215882 containerd.go:627] all images are preloaded for containerd runtime.
I0329 17:05:03.737775 215882 cache_images.go:84] Images are preloaded, skipping loading
I0329 17:05:03.737783 215882 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0329 17:05:03.737891 215882 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-469910 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-469910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0329 17:05:03.737956 215882 ssh_runner.go:195] Run: sudo crictl info
I0329 17:05:03.817455 215882 cni.go:84] Creating CNI manager for ""
I0329 17:05:03.817475 215882 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0329 17:05:03.817484 215882 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0329 17:05:03.817504 215882 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-469910 NodeName:old-k8s-version-469910 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0329 17:05:03.817660 215882 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-469910"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0329 17:05:03.817755 215882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0329 17:05:03.829755 215882 binaries.go:44] Found k8s binaries, skipping transfer
I0329 17:05:03.829875 215882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0329 17:05:03.841435 215882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0329 17:05:03.862550 215882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0329 17:05:03.895020 215882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0329 17:05:03.916970 215882 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0329 17:05:03.920835 215882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0329 17:05:03.932391 215882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0329 17:05:04.045392 215882 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0329 17:05:04.064417 215882 certs.go:68] Setting up /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/old-k8s-version-469910 for IP: 192.168.76.2
I0329 17:05:04.064442 215882 certs.go:194] generating shared ca certs ...
I0329 17:05:04.064458 215882 certs.go:226] acquiring lock for ca certs: {Name:mkd8f35c7fbd9d32ba41be2af2d591b6aa6cf234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:05:04.064650 215882 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20470-2310/.minikube/ca.key
I0329 17:05:04.064730 215882 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20470-2310/.minikube/proxy-client-ca.key
I0329 17:05:04.064747 215882 certs.go:256] generating profile certs ...
I0329 17:05:04.064864 215882 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/old-k8s-version-469910/client.key
I0329 17:05:04.064967 215882 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/old-k8s-version-469910/apiserver.key.bb0ec0db
I0329 17:05:04.065042 215882 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/old-k8s-version-469910/proxy-client.key
I0329 17:05:04.065197 215882 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/7781.pem (1338 bytes)
W0329 17:05:04.065259 215882 certs.go:480] ignoring /home/jenkins/minikube-integration/20470-2310/.minikube/certs/7781_empty.pem, impossibly tiny 0 bytes
I0329 17:05:04.065276 215882 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca-key.pem (1675 bytes)
I0329 17:05:04.065320 215882 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca.pem (1082 bytes)
I0329 17:05:04.065373 215882 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/cert.pem (1123 bytes)
I0329 17:05:04.065427 215882 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/key.pem (1679 bytes)
I0329 17:05:04.065511 215882 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/files/etc/ssl/certs/77812.pem (1708 bytes)
I0329 17:05:04.066371 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0329 17:05:04.108256 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0329 17:05:04.153855 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0329 17:05:04.237358 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0329 17:05:04.320868 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/old-k8s-version-469910/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0329 17:05:04.365355 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/old-k8s-version-469910/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0329 17:05:04.400689 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/old-k8s-version-469910/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0329 17:05:04.437996 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/old-k8s-version-469910/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0329 17:05:04.468941 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0329 17:05:04.513000 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/certs/7781.pem --> /usr/share/ca-certificates/7781.pem (1338 bytes)
I0329 17:05:04.619303 215882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/files/etc/ssl/certs/77812.pem --> /usr/share/ca-certificates/77812.pem (1708 bytes)
I0329 17:05:04.674063 215882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0329 17:05:04.723683 215882 ssh_runner.go:195] Run: openssl version
I0329 17:05:04.733335 215882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0329 17:05:04.744204 215882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0329 17:05:04.752854 215882 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 29 16:18 /usr/share/ca-certificates/minikubeCA.pem
I0329 17:05:04.752928 215882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0329 17:05:04.764966 215882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0329 17:05:04.777186 215882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7781.pem && ln -fs /usr/share/ca-certificates/7781.pem /etc/ssl/certs/7781.pem"
I0329 17:05:04.789988 215882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7781.pem
I0329 17:05:04.796809 215882 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 29 16:26 /usr/share/ca-certificates/7781.pem
I0329 17:05:04.796879 215882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7781.pem
I0329 17:05:04.808090 215882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7781.pem /etc/ssl/certs/51391683.0"
I0329 17:05:04.821171 215882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77812.pem && ln -fs /usr/share/ca-certificates/77812.pem /etc/ssl/certs/77812.pem"
I0329 17:05:04.833776 215882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77812.pem
I0329 17:05:04.842780 215882 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 29 16:26 /usr/share/ca-certificates/77812.pem
I0329 17:05:04.842874 215882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77812.pem
I0329 17:05:04.852544 215882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77812.pem /etc/ssl/certs/3ec20f2e.0"
I0329 17:05:04.866268 215882 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0329 17:05:04.871876 215882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0329 17:05:04.889757 215882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0329 17:05:04.902746 215882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0329 17:05:04.916881 215882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0329 17:05:04.928412 215882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0329 17:05:04.943041 215882 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0329 17:05:04.953278 215882 kubeadm.go:392] StartCluster: {Name:old-k8s-version-469910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-469910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0329 17:05:04.953422 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0329 17:05:04.953479 215882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0329 17:05:05.009956 215882 cri.go:89] found id: "a5dfee4a506c778834f309546822471bfe29cb70606442c7cda067bc889ec4e8"
I0329 17:05:05.009985 215882 cri.go:89] found id: "d73c4b171565d73ace6405634046441c894c4d53c7ca6b54394fc1f03f94bf95"
I0329 17:05:05.009990 215882 cri.go:89] found id: "802f6ad6eeb21bd6d3225b140f9b7239645251f3f5ec2dee169cf08eefad071b"
I0329 17:05:05.009993 215882 cri.go:89] found id: "2bb6df8707154298dfc0cb21f5c505ece8764779eb903346bffb88181622549c"
I0329 17:05:05.009997 215882 cri.go:89] found id: "45b6c3befe403c57c64897456aea1d1f627af2619be6ae801ddef2b592be0f0b"
I0329 17:05:05.010002 215882 cri.go:89] found id: "5844d741e22ff09ae2c803a73a957b04b60d9c6d7c529313cb014e7d6aa2cd2b"
I0329 17:05:05.010006 215882 cri.go:89] found id: "e4211e5b58844aac9df57b0f72dd7ef968a74d18917ec3d0dc6bca362a5d010f"
I0329 17:05:05.010009 215882 cri.go:89] found id: "fdf64c9da80b16dd615c8a65bf20e7dbdac57ddd63ab9ea71557f869e4214e70"
I0329 17:05:05.010012 215882 cri.go:89] found id: ""
I0329 17:05:05.010066 215882 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0329 17:05:05.029476 215882 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-03-29T17:05:05Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0329 17:05:05.029571 215882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0329 17:05:05.042152 215882 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0329 17:05:05.042190 215882 kubeadm.go:593] restartPrimaryControlPlane start ...
I0329 17:05:05.042248 215882 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0329 17:05:05.054546 215882 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0329 17:05:05.055144 215882 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-469910" does not appear in /home/jenkins/minikube-integration/20470-2310/kubeconfig
I0329 17:05:05.055498 215882 kubeconfig.go:62] /home/jenkins/minikube-integration/20470-2310/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-469910" cluster setting kubeconfig missing "old-k8s-version-469910" context setting]
I0329 17:05:05.055990 215882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20470-2310/kubeconfig: {Name:mk67c59b90eac0925d283f0bd0edd038ba6c7c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
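[editor's note] The WriteFile line above shows the kubeconfig being repaired under a named file lock with Delay:500ms and Timeout:1m0s. A sketch of that retry/timeout shape; minikube's actual lock package differs, and the O_EXCL lockfile below is an illustrative stand-in, not its implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire retries every delay until timeout, the policy shown in the
// {Delay:500ms Timeout:1m0s} spec logged above.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lock) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for " + lock)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/kubeconfig", 500*time.Millisecond, time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to rewrite kubeconfig")
}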
I0329 17:05:05.057513 215882 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0329 17:05:05.067520 215882 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0329 17:05:05.067550 215882 kubeadm.go:597] duration metric: took 25.354003ms to restartPrimaryControlPlane
I0329 17:05:05.067559 215882 kubeadm.go:394] duration metric: took 114.298488ms to StartCluster
I0329 17:05:05.067573 215882 settings.go:142] acquiring lock: {Name:mk0e5c956c90ea91a9d840799eff947964a7a98c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:05:05.067629 215882 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20470-2310/kubeconfig
I0329 17:05:05.069204 215882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20470-2310/kubeconfig: {Name:mk67c59b90eac0925d283f0bd0edd038ba6c7c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:05:05.069870 215882 start.go:238] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0329 17:05:05.070972 215882 config.go:182] Loaded profile config "old-k8s-version-469910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0329 17:05:05.071026 215882 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0329 17:05:05.071176 215882 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-469910"
I0329 17:05:05.071193 215882 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-469910"
W0329 17:05:05.071200 215882 addons.go:247] addon storage-provisioner should already be in state true
I0329 17:05:05.071224 215882 host.go:66] Checking if "old-k8s-version-469910" exists ...
I0329 17:05:05.072821 215882 cli_runner.go:164] Run: docker container inspect old-k8s-version-469910 --format={{.State.Status}}
I0329 17:05:05.075336 215882 addons.go:69] Setting dashboard=true in profile "old-k8s-version-469910"
I0329 17:05:05.075407 215882 addons.go:238] Setting addon dashboard=true in "old-k8s-version-469910"
W0329 17:05:05.075419 215882 addons.go:247] addon dashboard should already be in state true
I0329 17:05:05.075447 215882 host.go:66] Checking if "old-k8s-version-469910" exists ...
I0329 17:05:05.076072 215882 cli_runner.go:164] Run: docker container inspect old-k8s-version-469910 --format={{.State.Status}}
I0329 17:05:05.076343 215882 out.go:177] * Verifying Kubernetes components...
I0329 17:05:05.076589 215882 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-469910"
I0329 17:05:05.076613 215882 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-469910"
W0329 17:05:05.076634 215882 addons.go:247] addon metrics-server should already be in state true
I0329 17:05:05.076659 215882 host.go:66] Checking if "old-k8s-version-469910" exists ...
I0329 17:05:05.077151 215882 cli_runner.go:164] Run: docker container inspect old-k8s-version-469910 --format={{.State.Status}}
I0329 17:05:05.078263 215882 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-469910"
I0329 17:05:05.078285 215882 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-469910"
I0329 17:05:05.078609 215882 cli_runner.go:164] Run: docker container inspect old-k8s-version-469910 --format={{.State.Status}}
I0329 17:05:05.092166 215882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0329 17:05:05.132189 215882 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0329 17:05:05.138839 215882 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:05:05.138865 215882 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0329 17:05:05.138934 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:05:05.172488 215882 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-469910"
W0329 17:05:05.172512 215882 addons.go:247] addon default-storageclass should already be in state true
I0329 17:05:05.172536 215882 host.go:66] Checking if "old-k8s-version-469910" exists ...
I0329 17:05:05.172948 215882 cli_runner.go:164] Run: docker container inspect old-k8s-version-469910 --format={{.State.Status}}
I0329 17:05:05.184044 215882 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0329 17:05:05.187128 215882 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0329 17:05:05.193305 215882 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0329 17:05:05.193332 215882 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0329 17:05:05.193397 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:05:05.203854 215882 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0329 17:05:05.206762 215882 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0329 17:05:05.206790 215882 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0329 17:05:05.206866 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:05:05.231517 215882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/old-k8s-version-469910/id_rsa Username:docker}
I0329 17:05:05.237416 215882 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0329 17:05:05.237435 215882 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0329 17:05:05.237501 215882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-469910
I0329 17:05:05.290467 215882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/old-k8s-version-469910/id_rsa Username:docker}
I0329 17:05:05.290985 215882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/old-k8s-version-469910/id_rsa Username:docker}
I0329 17:05:05.303666 215882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/old-k8s-version-469910/id_rsa Username:docker}
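[editor's note] Each addon installer above resolves the container's forwarded SSH port with a `docker container inspect` Go template, then dials 127.0.0.1:<port> (33068 in this run). A sketch of that lookup, with the template string copied verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort returns the host port mapped to the container's 22/tcp.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("old-k8s-version-469910")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh to 127.0.0.1:" + port)
}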
I0329 17:05:05.320896 215882 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0329 17:05:05.346132 215882 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-469910" to be "Ready" ...
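[editor's note] From here the test polls the node object for up to 6m; the later "dial tcp 192.168.76.2:8443: connect: connection refused" lines are this poll failing until the restarted apiserver answers. A sketch of the retry shape only; the real check authenticates and parses the Ready condition from the response, both omitted here:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls the apiserver node endpoint until it answers at all.
func waitNodeReady(url string, timeout time.Duration) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil // apiserver is answering again
		}
		fmt.Println("still waiting:", err)
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node not ready within %s", timeout)
}

func main() {
	_ = waitNodeReady("https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-469910", 6*time.Minute)
}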
I0329 17:05:05.460569 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:05:05.597342 215882 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0329 17:05:05.597370 215882 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0329 17:05:05.675938 215882 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0329 17:05:05.675963 215882 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0329 17:05:05.689525 215882 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0329 17:05:05.689549 215882 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0329 17:05:05.691821 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
W0329 17:05:05.762962 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:05.763008 215882 retry.go:31] will retry after 260.978636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
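[editor's note] The retry.go lines that follow re-run each failed kubectl apply after randomized, roughly growing delays (260ms, 226ms, ... up to several seconds). A sketch of that pattern under assumed parameters; the multiplier and jitter below are illustrative, not minikube's exact backoff policy:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryApply re-runs a kubectl command with jittered, doubling delays.
func retryApply(args []string, attempts int) error {
	delay := 250 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", args...).Run(); err == nil {
			return nil
		}
		d := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		delay *= 2 // grow toward the multi-second waits seen below
	}
	return err
}

func main() {
	_ = retryApply([]string{"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml"}, 5)
}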
I0329 17:05:05.803599 215882 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0329 17:05:05.803645 215882 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0329 17:05:05.820623 215882 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0329 17:05:05.820687 215882 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0329 17:05:05.883236 215882 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0329 17:05:05.883257 215882 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0329 17:05:05.898174 215882 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0329 17:05:05.898202 215882 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
W0329 17:05:05.952987 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:05.953033 215882 retry.go:31] will retry after 226.359651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:05.964922 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0329 17:05:05.972432 215882 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0329 17:05:05.972457 215882 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0329 17:05:06.013899 215882 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0329 17:05:06.013965 215882 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0329 17:05:06.025186 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:05:06.105833 215882 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0329 17:05:06.105897 215882 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0329 17:05:06.180107 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0329 17:05:06.214721 215882 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0329 17:05:06.214793 215882 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
W0329 17:05:06.275804 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:06.275894 215882 retry.go:31] will retry after 170.940433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:05:06.318400 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:06.318493 215882 retry.go:31] will retry after 454.278765ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:06.326721 215882 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:05:06.326798 215882 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0329 17:05:06.398966 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:05:06.447769 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0329 17:05:06.451255 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:06.451333 215882 retry.go:31] will retry after 349.009558ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:05:06.562492 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:06.562585 215882 retry.go:31] will retry after 163.823509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:05:06.657699 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:06.657781 215882 retry.go:31] will retry after 237.654333ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:06.727000 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:05:06.773477 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:05:06.800929 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0329 17:05:06.895781 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0329 17:05:06.986522 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:06.986599 215882 retry.go:31] will retry after 315.558583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:05:07.084633 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:07.084705 215882 retry.go:31] will retry after 638.607265ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:05:07.163864 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:07.163964 215882 retry.go:31] will retry after 838.455163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:05:07.224684 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:07.224757 215882 retry.go:31] will retry after 672.951103ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:07.303066 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:05:07.346630 215882 node_ready.go:53] error getting node "old-k8s-version-469910": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-469910": dial tcp 192.168.76.2:8443: connect: connection refused
W0329 17:05:07.431897 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:07.431973 215882 retry.go:31] will retry after 542.512968ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:07.724488 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0329 17:05:07.858782 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:07.858868 215882 retry.go:31] will retry after 862.754772ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:07.898079 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0329 17:05:07.975493 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:05:08.002890 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0329 17:05:08.075795 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:08.075908 215882 retry.go:31] will retry after 691.52459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:05:08.206463 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:08.206561 215882 retry.go:31] will retry after 1.180361826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:05:08.241750 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:08.241841 215882 retry.go:31] will retry after 724.312266ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:08.721933 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:05:08.768318 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0329 17:05:08.924875 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:08.924974 215882 retry.go:31] will retry after 894.658546ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:05:08.939527 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:08.939603 215882 retry.go:31] will retry after 1.11678522s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:08.966893 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0329 17:05:09.096295 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:09.096376 215882 retry.go:31] will retry after 1.802454915s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:09.346932 215882 node_ready.go:53] error getting node "old-k8s-version-469910": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-469910": dial tcp 192.168.76.2:8443: connect: connection refused
I0329 17:05:09.387201 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0329 17:05:09.534571 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:09.534599 215882 retry.go:31] will retry after 1.114461438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:09.820153 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0329 17:05:09.948586 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:09.948619 215882 retry.go:31] will retry after 2.607278158s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:10.056916 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0329 17:05:10.204767 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:10.204797 215882 retry.go:31] will retry after 2.489584414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:10.649958 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0329 17:05:10.785621 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:10.785652 215882 retry.go:31] will retry after 2.764593961s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:10.899416 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0329 17:05:11.042492 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:11.042520 215882 retry.go:31] will retry after 1.319803075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:11.347276 215882 node_ready.go:53] error getting node "old-k8s-version-469910": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-469910": dial tcp 192.168.76.2:8443: connect: connection refused
I0329 17:05:12.362565 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0329 17:05:12.499549 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:12.499577 215882 retry.go:31] will retry after 3.499553883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:12.556807 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:05:12.695333 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0329 17:05:12.732556 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:12.732585 215882 retry.go:31] will retry after 1.596238053s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:05:12.919154 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:12.919182 215882 retry.go:31] will retry after 3.035806102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:13.551164 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:05:13.846657 215882 node_ready.go:53] error getting node "old-k8s-version-469910": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-469910": dial tcp 192.168.76.2:8443: connect: connection refused
W0329 17:05:13.882728 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:13.882761 215882 retry.go:31] will retry after 3.074494752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:14.328976 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0329 17:05:14.790167 215882 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:14.790195 215882 retry.go:31] will retry after 5.996259693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:05:15.955494 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0329 17:05:16.000135 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0329 17:05:16.958231 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:05:20.787642 215882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:05:23.777423 215882 node_ready.go:49] node "old-k8s-version-469910" has status "Ready":"True"
I0329 17:05:23.777444 215882 node_ready.go:38] duration metric: took 18.431268785s for node "old-k8s-version-469910" to be "Ready" ...
I0329 17:05:23.777455 215882 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
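[editor's note] Once the node is Ready, the test waits on each system-critical pod (matched by the labels listed above) until its Ready condition is True; the long run of pod_ready lines below is that poll against etcd. A sketch of one such check via kubectl's jsonpath, an assumed stand-in for minikube's client-go based check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(ns, name string) bool {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	for !podReady("kube-system", "etcd-old-k8s-version-469910") {
		fmt.Println(`status "Ready":"False"; polling again`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println(`status "Ready":"True"`)
}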
I0329 17:05:24.215994 215882 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-svwdf" in "kube-system" namespace to be "Ready" ...
I0329 17:05:24.421132 215882 pod_ready.go:93] pod "coredns-74ff55c5b-svwdf" in "kube-system" namespace has status "Ready":"True"
I0329 17:05:24.421151 215882 pod_ready.go:82] duration metric: took 205.130775ms for pod "coredns-74ff55c5b-svwdf" in "kube-system" namespace to be "Ready" ...
I0329 17:05:24.421164 215882 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-469910" in "kube-system" namespace to be "Ready" ...
I0329 17:05:26.130952 215882 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.175407954s)
I0329 17:05:26.130984 215882 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-469910"
I0329 17:05:26.131029 215882 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.130866869s)
I0329 17:05:26.441034 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:26.461263 215882 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.502987453s)
I0329 17:05:26.461579 215882 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.673913333s)
I0329 17:05:26.465222 215882 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-469910 addons enable metrics-server
I0329 17:05:26.468333 215882 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
I0329 17:05:26.471452 215882 addons.go:514] duration metric: took 21.400418154s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner dashboard]
I0329 17:05:28.932384 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:30.946040 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:33.427450 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:35.927980 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:38.426702 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:40.427475 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:42.471001 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:44.927163 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:46.935957 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:49.428097 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:51.429320 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:53.929045 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:56.427526 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:05:58.927585 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:00.932007 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:02.933430 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:05.428301 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:07.953014 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:10.426614 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:12.428017 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:14.933865 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:17.426922 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:19.427136 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:21.956516 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:24.427338 215882 pod_ready.go:103] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:25.426542 215882 pod_ready.go:93] pod "etcd-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"True"
I0329 17:06:25.426567 215882 pod_ready.go:82] duration metric: took 1m1.00539505s for pod "etcd-old-k8s-version-469910" in "kube-system" namespace to be "Ready" ...
I0329 17:06:25.426583 215882 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-469910" in "kube-system" namespace to be "Ready" ...
I0329 17:06:25.430458 215882 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"True"
I0329 17:06:25.430482 215882 pod_ready.go:82] duration metric: took 3.889247ms for pod "kube-apiserver-old-k8s-version-469910" in "kube-system" namespace to be "Ready" ...
I0329 17:06:25.430496 215882 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-469910" in "kube-system" namespace to be "Ready" ...
I0329 17:06:27.436821 215882 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:29.437087 215882 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:31.936458 215882 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:34.012534 215882 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:36.436762 215882 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:38.936529 215882 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:41.435852 215882 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:42.436894 215882 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"True"
I0329 17:06:42.436921 215882 pod_ready.go:82] duration metric: took 17.006417611s for pod "kube-controller-manager-old-k8s-version-469910" in "kube-system" namespace to be "Ready" ...
I0329 17:06:42.436933 215882 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wcgkr" in "kube-system" namespace to be "Ready" ...
I0329 17:06:42.441128 215882 pod_ready.go:93] pod "kube-proxy-wcgkr" in "kube-system" namespace has status "Ready":"True"
I0329 17:06:42.441155 215882 pod_ready.go:82] duration metric: took 4.214213ms for pod "kube-proxy-wcgkr" in "kube-system" namespace to be "Ready" ...
I0329 17:06:42.441167 215882 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-469910" in "kube-system" namespace to be "Ready" ...
I0329 17:06:42.444816 215882 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-469910" in "kube-system" namespace has status "Ready":"True"
I0329 17:06:42.444840 215882 pod_ready.go:82] duration metric: took 3.664807ms for pod "kube-scheduler-old-k8s-version-469910" in "kube-system" namespace to be "Ready" ...
I0329 17:06:42.444853 215882 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace to be "Ready" ...
I0329 17:06:44.457420 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:46.952086 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:49.453495 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:51.950671 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:54.452434 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:56.456857 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:06:58.951246 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:00.951504 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:03.450830 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:05.451055 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:07.950026 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:09.962407 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:12.451308 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:14.950341 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:16.955152 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:19.450256 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:21.450490 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:23.450584 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:25.450851 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:27.950633 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:30.455103 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:32.950497 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:35.450678 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:37.451185 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:39.950139 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:41.950486 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:43.955984 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:46.450336 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:48.950504 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:50.950971 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:53.450670 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:55.950147 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:07:57.950573 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:00.449480 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:02.449830 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:04.449991 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:06.453192 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:08.456538 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:10.950772 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:13.450943 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:15.950530 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:17.951050 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:20.450975 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:22.950192 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:24.950672 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:27.450082 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:29.450422 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:31.950711 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:33.950753 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:36.452779 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:38.950167 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:41.451337 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:43.951327 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:46.450324 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:48.450824 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:50.950402 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:52.951311 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:55.450833 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:57.451319 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:08:59.949937 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:01.950814 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:04.450868 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:06.949988 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:08.950175 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:10.950243 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:12.952321 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:15.449640 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:17.456520 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:19.957807 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:22.450162 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:24.950461 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:27.450360 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:29.450858 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:31.949726 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:33.950610 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:36.463944 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:38.967621 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:41.451674 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:43.452152 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:45.950250 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:47.950277 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:49.951445 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:51.951576 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:53.955411 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:56.460632 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:09:58.951145 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:00.951287 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:03.451229 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:05.950346 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:07.950716 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:10.451101 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:12.951764 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:15.450467 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:17.451339 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:19.950158 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:22.450064 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:24.450974 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:26.949819 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:28.950438 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:31.450530 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:33.450562 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:35.951915 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:38.450109 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:40.951765 215882 pod_ready.go:103] pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace has status "Ready":"False"
I0329 17:10:42.450745 215882 pod_ready.go:82] duration metric: took 4m0.005878706s for pod "metrics-server-9975d5f86-k7v6g" in "kube-system" namespace to be "Ready" ...
E0329 17:10:42.450772 215882 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0329 17:10:42.450782 215882 pod_ready.go:39] duration metric: took 5m18.673315066s for extra waiting for all system-critical pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
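The 4m0s timeout above is the metrics-server pod never reaching Ready: its container image points at fake.domain/registry.k8s.io/echoserver:1.4, a host that, as the kubelet entries gathered below show, cannot be resolved. A quick manual confirmation, assuming the same kubeconfig (the pod name is taken from the log above):

  # STATUS stays ImagePullBackOff, READY stays 0/1
  kubectl -n kube-system get pod metrics-server-9975d5f86-k7v6g
  # the Events section shows the failed pull from fake.domain
  kubectl -n kube-system describe pod metrics-server-9975d5f86-k7v6g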
I0329 17:10:42.450796 215882 api_server.go:52] waiting for apiserver process to appear ...
I0329 17:10:42.450839 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0329 17:10:42.450903 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0329 17:10:42.489392 215882 cri.go:89] found id: "1d665425da32470bd2630153351ec20d276fad049da85eaa1032a6def1a1deff"
I0329 17:10:42.489418 215882 cri.go:89] found id: "e4211e5b58844aac9df57b0f72dd7ef968a74d18917ec3d0dc6bca362a5d010f"
I0329 17:10:42.489424 215882 cri.go:89] found id: ""
I0329 17:10:42.489431 215882 logs.go:282] 2 containers: [1d665425da32470bd2630153351ec20d276fad049da85eaa1032a6def1a1deff e4211e5b58844aac9df57b0f72dd7ef968a74d18917ec3d0dc6bca362a5d010f]
I0329 17:10:42.489487 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.493210 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.496658 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0329 17:10:42.496772 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0329 17:10:42.541639 215882 cri.go:89] found id: "e8ee39792992c3690c7a1594f566f69559236cf0a1ffa535c6ae2e183727988d"
I0329 17:10:42.541662 215882 cri.go:89] found id: "45b6c3befe403c57c64897456aea1d1f627af2619be6ae801ddef2b592be0f0b"
I0329 17:10:42.541668 215882 cri.go:89] found id: ""
I0329 17:10:42.541675 215882 logs.go:282] 2 containers: [e8ee39792992c3690c7a1594f566f69559236cf0a1ffa535c6ae2e183727988d 45b6c3befe403c57c64897456aea1d1f627af2619be6ae801ddef2b592be0f0b]
I0329 17:10:42.541735 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.546286 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.549432 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0329 17:10:42.549506 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0329 17:10:42.589461 215882 cri.go:89] found id: "b1d8c9bdcf51275b8ee260ff27008e50a288380eee492e2ea65a17eecd633a05"
I0329 17:10:42.589483 215882 cri.go:89] found id: "a5dfee4a506c778834f309546822471bfe29cb70606442c7cda067bc889ec4e8"
I0329 17:10:42.589488 215882 cri.go:89] found id: ""
I0329 17:10:42.589500 215882 logs.go:282] 2 containers: [b1d8c9bdcf51275b8ee260ff27008e50a288380eee492e2ea65a17eecd633a05 a5dfee4a506c778834f309546822471bfe29cb70606442c7cda067bc889ec4e8]
I0329 17:10:42.589555 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.592866 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.596093 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0329 17:10:42.596177 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0329 17:10:42.635862 215882 cri.go:89] found id: "bfdb0b4297e36c3796a7a426259c1feba5b7a1a067614b3d35556d6bdbbc76ee"
I0329 17:10:42.635883 215882 cri.go:89] found id: "5844d741e22ff09ae2c803a73a957b04b60d9c6d7c529313cb014e7d6aa2cd2b"
I0329 17:10:42.635889 215882 cri.go:89] found id: ""
I0329 17:10:42.635896 215882 logs.go:282] 2 containers: [bfdb0b4297e36c3796a7a426259c1feba5b7a1a067614b3d35556d6bdbbc76ee 5844d741e22ff09ae2c803a73a957b04b60d9c6d7c529313cb014e7d6aa2cd2b]
I0329 17:10:42.635951 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.639445 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.642826 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0329 17:10:42.642901 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0329 17:10:42.686637 215882 cri.go:89] found id: "476f8a391def914e7a67fa8cc7c10883e946e9e4bdd7cef65ed70f98df3ef191"
I0329 17:10:42.686724 215882 cri.go:89] found id: "2bb6df8707154298dfc0cb21f5c505ece8764779eb903346bffb88181622549c"
I0329 17:10:42.686745 215882 cri.go:89] found id: ""
I0329 17:10:42.686768 215882 logs.go:282] 2 containers: [476f8a391def914e7a67fa8cc7c10883e946e9e4bdd7cef65ed70f98df3ef191 2bb6df8707154298dfc0cb21f5c505ece8764779eb903346bffb88181622549c]
I0329 17:10:42.686862 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.690401 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.693894 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0329 17:10:42.694018 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0329 17:10:42.736074 215882 cri.go:89] found id: "476bef2c0ac3db5deaf24b4ec3339f3ab38a00eace7a38e235325e571ed11ea1"
I0329 17:10:42.736098 215882 cri.go:89] found id: "fdf64c9da80b16dd615c8a65bf20e7dbdac57ddd63ab9ea71557f869e4214e70"
I0329 17:10:42.736105 215882 cri.go:89] found id: ""
I0329 17:10:42.736112 215882 logs.go:282] 2 containers: [476bef2c0ac3db5deaf24b4ec3339f3ab38a00eace7a38e235325e571ed11ea1 fdf64c9da80b16dd615c8a65bf20e7dbdac57ddd63ab9ea71557f869e4214e70]
I0329 17:10:42.736193 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.739742 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.743112 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0329 17:10:42.743186 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0329 17:10:42.780669 215882 cri.go:89] found id: "08cd64e5efe8ff52775b40477442e26a8cdba2671752a9c947ffb54af011e505"
I0329 17:10:42.780694 215882 cri.go:89] found id: "d73c4b171565d73ace6405634046441c894c4d53c7ca6b54394fc1f03f94bf95"
I0329 17:10:42.780700 215882 cri.go:89] found id: ""
I0329 17:10:42.780707 215882 logs.go:282] 2 containers: [08cd64e5efe8ff52775b40477442e26a8cdba2671752a9c947ffb54af011e505 d73c4b171565d73ace6405634046441c894c4d53c7ca6b54394fc1f03f94bf95]
I0329 17:10:42.780765 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.784937 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.788332 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0329 17:10:42.788408 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0329 17:10:42.824614 215882 cri.go:89] found id: "9affe02cbdfd87007f6bb996c748c3449799a41720b75c90d252c62fcb927af2"
I0329 17:10:42.824635 215882 cri.go:89] found id: ""
I0329 17:10:42.824643 215882 logs.go:282] 1 container: [9affe02cbdfd87007f6bb996c748c3449799a41720b75c90d252c62fcb927af2]
I0329 17:10:42.824702 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.828062 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0329 17:10:42.828157 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0329 17:10:42.875423 215882 cri.go:89] found id: "7170455291cf3cb3b0f76ac6cd41db4b1dae2328482597589a839fe7a7e8e9a2"
I0329 17:10:42.875444 215882 cri.go:89] found id: "c332b5510a0e67142842621140541b4ab72255b20ae34d6867fef0ea4307b24b"
I0329 17:10:42.875449 215882 cri.go:89] found id: ""
I0329 17:10:42.875456 215882 logs.go:282] 2 containers: [7170455291cf3cb3b0f76ac6cd41db4b1dae2328482597589a839fe7a7e8e9a2 c332b5510a0e67142842621140541b4ab72255b20ae34d6867fef0ea4307b24b]
I0329 17:10:42.875530 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:42.879565 215882 ssh_runner.go:195] Run: which crictl
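The listing above is minikube's log-gathering pattern: for each component it resolves crictl, lists container IDs with crictl ps -a --quiet --name=<component>, and then fetches logs per ID. The same two steps by hand over SSH to the node, using the kube-apiserver ID found above:

  # IDs only, including exited containers
  sudo crictl ps -a --quiet --name=kube-apiserver
  # last 400 lines of logs for one of the returned IDs
  sudo crictl logs --tail 400 1d665425da32470bd2630153351ec20d276fad049da85eaa1032a6def1a1deff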
I0329 17:10:42.886206 215882 logs.go:123] Gathering logs for kubelet ...
I0329 17:10:42.886243 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0329 17:10:42.946046 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.715259 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:42.946287 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.715674 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-pmvfp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-pmvfp" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:42.946518 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767001 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-rwmwb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-rwmwb" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:42.946731 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767092 662 reflector.go:138] object-"kube-system"/"kindnet-token-m4t4x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-m4t4x" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:42.946939 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767148 662 reflector.go:138] object-"default"/"default-token-xbb7l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-xbb7l" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:42.947182 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767202 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:42.947406 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767251 662 reflector.go:138] object-"kube-system"/"coredns-token-b8sn7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-b8sn7" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:42.947629 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767297 662 reflector.go:138] object-"kube-system"/"metrics-server-token-fmlf6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fmlf6" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:42.955257 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:27 old-k8s-version-469910 kubelet[662]: E0329 17:05:27.314655 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:10:42.955483 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:28 old-k8s-version-469910 kubelet[662]: E0329 17:05:28.220496 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.958252 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:39 old-k8s-version-469910 kubelet[662]: E0329 17:05:39.951360 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:10:42.960197 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:52 old-k8s-version-469910 kubelet[662]: E0329 17:05:52.313089 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.961010 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:53 old-k8s-version-469910 kubelet[662]: E0329 17:05:53.327259 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.961200 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:54 old-k8s-version-469910 kubelet[662]: E0329 17:05:54.925664 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.961529 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:57 old-k8s-version-469910 kubelet[662]: E0329 17:05:57.710953 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.961966 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:58 old-k8s-version-469910 kubelet[662]: E0329 17:05:58.342508 662 pod_workers.go:191] Error syncing pod 59a53811-f6b6-411a-a123-792d40062106 ("storage-provisioner_kube-system(59a53811-f6b6-411a-a123-792d40062106)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(59a53811-f6b6-411a-a123-792d40062106)"
W0329 17:10:42.962890 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:08 old-k8s-version-469910 kubelet[662]: E0329 17:06:08.376633 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.965415 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:08 old-k8s-version-469910 kubelet[662]: E0329 17:06:08.946979 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:10:42.965882 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:17 old-k8s-version-469910 kubelet[662]: E0329 17:06:17.711547 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.966069 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:21 old-k8s-version-469910 kubelet[662]: E0329 17:06:21.924650 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.966666 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:31 old-k8s-version-469910 kubelet[662]: E0329 17:06:31.441423 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.966849 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:33 old-k8s-version-469910 kubelet[662]: E0329 17:06:33.924999 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.967175 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:37 old-k8s-version-469910 kubelet[662]: E0329 17:06:37.711494 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.967358 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:46 old-k8s-version-469910 kubelet[662]: E0329 17:06:46.924781 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.967731 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:50 old-k8s-version-469910 kubelet[662]: E0329 17:06:50.924485 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.970185 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:59 old-k8s-version-469910 kubelet[662]: E0329 17:06:59.935763 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:10:42.970517 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:05 old-k8s-version-469910 kubelet[662]: E0329 17:07:05.924374 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.970700 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:13 old-k8s-version-469910 kubelet[662]: E0329 17:07:13.924843 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.971286 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:17 old-k8s-version-469910 kubelet[662]: E0329 17:07:17.575307 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.971645 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:18 old-k8s-version-469910 kubelet[662]: E0329 17:07:18.580848 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.971831 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:24 old-k8s-version-469910 kubelet[662]: E0329 17:07:24.925331 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.972160 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:30 old-k8s-version-469910 kubelet[662]: E0329 17:07:30.924368 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.972343 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:37 old-k8s-version-469910 kubelet[662]: E0329 17:07:37.924941 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.972680 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:41 old-k8s-version-469910 kubelet[662]: E0329 17:07:41.924437 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.972863 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:50 old-k8s-version-469910 kubelet[662]: E0329 17:07:50.926604 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.973187 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:55 old-k8s-version-469910 kubelet[662]: E0329 17:07:55.924374 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.973370 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:03 old-k8s-version-469910 kubelet[662]: E0329 17:08:03.925408 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.973698 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:09 old-k8s-version-469910 kubelet[662]: E0329 17:08:09.924355 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.973881 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:14 old-k8s-version-469910 kubelet[662]: E0329 17:08:14.924851 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.974211 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:23 old-k8s-version-469910 kubelet[662]: E0329 17:08:23.924360 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.976670 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:25 old-k8s-version-469910 kubelet[662]: E0329 17:08:25.935664 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:10:42.977003 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:36 old-k8s-version-469910 kubelet[662]: E0329 17:08:36.924770 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.977188 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:37 old-k8s-version-469910 kubelet[662]: E0329 17:08:37.924679 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.977808 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:49 old-k8s-version-469910 kubelet[662]: E0329 17:08:49.806140 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.978027 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:49 old-k8s-version-469910 kubelet[662]: E0329 17:08:49.924719 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.978364 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:57 old-k8s-version-469910 kubelet[662]: E0329 17:08:57.711479 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.978549 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:02 old-k8s-version-469910 kubelet[662]: E0329 17:09:02.927605 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.978879 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:09 old-k8s-version-469910 kubelet[662]: E0329 17:09:09.925706 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.979066 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:14 old-k8s-version-469910 kubelet[662]: E0329 17:09:14.926280 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.979410 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:22 old-k8s-version-469910 kubelet[662]: E0329 17:09:22.924663 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.979598 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:27 old-k8s-version-469910 kubelet[662]: E0329 17:09:27.924793 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.979924 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:37 old-k8s-version-469910 kubelet[662]: E0329 17:09:37.924460 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.980117 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:38 old-k8s-version-469910 kubelet[662]: E0329 17:09:38.926782 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.980452 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:48 old-k8s-version-469910 kubelet[662]: E0329 17:09:48.928853 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.980638 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:49 old-k8s-version-469910 kubelet[662]: E0329 17:09:49.925066 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.980964 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:02 old-k8s-version-469910 kubelet[662]: E0329 17:10:02.926703 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.981147 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:02 old-k8s-version-469910 kubelet[662]: E0329 17:10:02.928431 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.981338 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:15 old-k8s-version-469910 kubelet[662]: E0329 17:10:15.925755 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.981664 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:16 old-k8s-version-469910 kubelet[662]: E0329 17:10:16.924947 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.981998 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:27 old-k8s-version-469910 kubelet[662]: E0329 17:10:27.924876 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.982191 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:28 old-k8s-version-469910 kubelet[662]: E0329 17:10:28.924927 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:42.982526 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:39 old-k8s-version-469910 kubelet[662]: E0329 17:10:39.925004 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:42.982710 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:40 old-k8s-version-469910 kubelet[662]: E0329 17:10:40.928277 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0329 17:10:42.982721 215882 logs.go:123] Gathering logs for kube-apiserver [1d665425da32470bd2630153351ec20d276fad049da85eaa1032a6def1a1deff] ...
I0329 17:10:42.982735 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d665425da32470bd2630153351ec20d276fad049da85eaa1032a6def1a1deff"
I0329 17:10:43.065922 215882 logs.go:123] Gathering logs for etcd [e8ee39792992c3690c7a1594f566f69559236cf0a1ffa535c6ae2e183727988d] ...
I0329 17:10:43.065960 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8ee39792992c3690c7a1594f566f69559236cf0a1ffa535c6ae2e183727988d"
I0329 17:10:43.110479 215882 logs.go:123] Gathering logs for etcd [45b6c3befe403c57c64897456aea1d1f627af2619be6ae801ddef2b592be0f0b] ...
I0329 17:10:43.110509 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45b6c3befe403c57c64897456aea1d1f627af2619be6ae801ddef2b592be0f0b"
I0329 17:10:43.164451 215882 logs.go:123] Gathering logs for coredns [b1d8c9bdcf51275b8ee260ff27008e50a288380eee492e2ea65a17eecd633a05] ...
I0329 17:10:43.164482 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1d8c9bdcf51275b8ee260ff27008e50a288380eee492e2ea65a17eecd633a05"
I0329 17:10:43.211550 215882 logs.go:123] Gathering logs for kube-scheduler [5844d741e22ff09ae2c803a73a957b04b60d9c6d7c529313cb014e7d6aa2cd2b] ...
I0329 17:10:43.211580 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5844d741e22ff09ae2c803a73a957b04b60d9c6d7c529313cb014e7d6aa2cd2b"
I0329 17:10:43.263859 215882 logs.go:123] Gathering logs for kindnet [08cd64e5efe8ff52775b40477442e26a8cdba2671752a9c947ffb54af011e505] ...
I0329 17:10:43.263893 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cd64e5efe8ff52775b40477442e26a8cdba2671752a9c947ffb54af011e505"
I0329 17:10:43.305744 215882 logs.go:123] Gathering logs for containerd ...
I0329 17:10:43.305874 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0329 17:10:43.366077 215882 logs.go:123] Gathering logs for dmesg ...
I0329 17:10:43.366119 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0329 17:10:43.383709 215882 logs.go:123] Gathering logs for describe nodes ...
I0329 17:10:43.383742 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0329 17:10:43.571852 215882 logs.go:123] Gathering logs for coredns [a5dfee4a506c778834f309546822471bfe29cb70606442c7cda067bc889ec4e8] ...
I0329 17:10:43.571881 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5dfee4a506c778834f309546822471bfe29cb70606442c7cda067bc889ec4e8"
I0329 17:10:43.616855 215882 logs.go:123] Gathering logs for kube-proxy [476f8a391def914e7a67fa8cc7c10883e946e9e4bdd7cef65ed70f98df3ef191] ...
I0329 17:10:43.616885 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476f8a391def914e7a67fa8cc7c10883e946e9e4bdd7cef65ed70f98df3ef191"
I0329 17:10:43.655347 215882 logs.go:123] Gathering logs for kube-proxy [2bb6df8707154298dfc0cb21f5c505ece8764779eb903346bffb88181622549c] ...
I0329 17:10:43.655430 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bb6df8707154298dfc0cb21f5c505ece8764779eb903346bffb88181622549c"
I0329 17:10:43.693888 215882 logs.go:123] Gathering logs for kube-controller-manager [476bef2c0ac3db5deaf24b4ec3339f3ab38a00eace7a38e235325e571ed11ea1] ...
I0329 17:10:43.693913 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476bef2c0ac3db5deaf24b4ec3339f3ab38a00eace7a38e235325e571ed11ea1"
I0329 17:10:43.754901 215882 logs.go:123] Gathering logs for kube-controller-manager [fdf64c9da80b16dd615c8a65bf20e7dbdac57ddd63ab9ea71557f869e4214e70] ...
I0329 17:10:43.754935 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdf64c9da80b16dd615c8a65bf20e7dbdac57ddd63ab9ea71557f869e4214e70"
I0329 17:10:43.822348 215882 logs.go:123] Gathering logs for storage-provisioner [c332b5510a0e67142842621140541b4ab72255b20ae34d6867fef0ea4307b24b] ...
I0329 17:10:43.822385 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c332b5510a0e67142842621140541b4ab72255b20ae34d6867fef0ea4307b24b"
I0329 17:10:43.870460 215882 logs.go:123] Gathering logs for kindnet [d73c4b171565d73ace6405634046441c894c4d53c7ca6b54394fc1f03f94bf95] ...
I0329 17:10:43.870486 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d73c4b171565d73ace6405634046441c894c4d53c7ca6b54394fc1f03f94bf95"
I0329 17:10:43.916547 215882 logs.go:123] Gathering logs for kubernetes-dashboard [9affe02cbdfd87007f6bb996c748c3449799a41720b75c90d252c62fcb927af2] ...
I0329 17:10:43.916573 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9affe02cbdfd87007f6bb996c748c3449799a41720b75c90d252c62fcb927af2"
I0329 17:10:43.955600 215882 logs.go:123] Gathering logs for kube-apiserver [e4211e5b58844aac9df57b0f72dd7ef968a74d18917ec3d0dc6bca362a5d010f] ...
I0329 17:10:43.955629 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4211e5b58844aac9df57b0f72dd7ef968a74d18917ec3d0dc6bca362a5d010f"
I0329 17:10:44.024654 215882 logs.go:123] Gathering logs for kube-scheduler [bfdb0b4297e36c3796a7a426259c1feba5b7a1a067614b3d35556d6bdbbc76ee] ...
I0329 17:10:44.024693 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfdb0b4297e36c3796a7a426259c1feba5b7a1a067614b3d35556d6bdbbc76ee"
I0329 17:10:44.067180 215882 logs.go:123] Gathering logs for storage-provisioner [7170455291cf3cb3b0f76ac6cd41db4b1dae2328482597589a839fe7a7e8e9a2] ...
I0329 17:10:44.067207 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7170455291cf3cb3b0f76ac6cd41db4b1dae2328482597589a839fe7a7e8e9a2"
I0329 17:10:44.120816 215882 logs.go:123] Gathering logs for container status ...
I0329 17:10:44.120843 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0329 17:10:44.202658 215882 out.go:358] Setting ErrFile to fd 2...
I0329 17:10:44.202687 215882 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0329 17:10:44.202740 215882 out.go:270] X Problems detected in kubelet:
W0329 17:10:44.202758 215882 out.go:270] Mar 29 17:10:16 old-k8s-version-469910 kubelet[662]: E0329 17:10:16.924947 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:44.202764 215882 out.go:270] Mar 29 17:10:27 old-k8s-version-469910 kubelet[662]: E0329 17:10:27.924876 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:44.202773 215882 out.go:270] Mar 29 17:10:28 old-k8s-version-469910 kubelet[662]: E0329 17:10:28.924927 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:44.202786 215882 out.go:270] Mar 29 17:10:39 old-k8s-version-469910 kubelet[662]: E0329 17:10:39.925004 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:44.202792 215882 out.go:270] Mar 29 17:10:40 old-k8s-version-469910 kubelet[662]: E0329 17:10:40.928277 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0329 17:10:44.202800 215882 out.go:358] Setting ErrFile to fd 2...
I0329 17:10:44.202805 215882 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:10:54.203609 215882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0329 17:10:54.215809 215882 api_server.go:72] duration metric: took 5m49.145890797s to wait for apiserver process to appear ...
I0329 17:10:54.215833 215882 api_server.go:88] waiting for apiserver healthz status ...
I0329 17:10:54.215870 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0329 17:10:54.215942 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0329 17:10:54.252512 215882 cri.go:89] found id: "1d665425da32470bd2630153351ec20d276fad049da85eaa1032a6def1a1deff"
I0329 17:10:54.252534 215882 cri.go:89] found id: "e4211e5b58844aac9df57b0f72dd7ef968a74d18917ec3d0dc6bca362a5d010f"
I0329 17:10:54.252540 215882 cri.go:89] found id: ""
I0329 17:10:54.252547 215882 logs.go:282] 2 containers: [1d665425da32470bd2630153351ec20d276fad049da85eaa1032a6def1a1deff e4211e5b58844aac9df57b0f72dd7ef968a74d18917ec3d0dc6bca362a5d010f]
I0329 17:10:54.252607 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.256334 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.260357 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0329 17:10:54.260448 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0329 17:10:54.299102 215882 cri.go:89] found id: "e8ee39792992c3690c7a1594f566f69559236cf0a1ffa535c6ae2e183727988d"
I0329 17:10:54.299127 215882 cri.go:89] found id: "45b6c3befe403c57c64897456aea1d1f627af2619be6ae801ddef2b592be0f0b"
I0329 17:10:54.299132 215882 cri.go:89] found id: ""
I0329 17:10:54.299139 215882 logs.go:282] 2 containers: [e8ee39792992c3690c7a1594f566f69559236cf0a1ffa535c6ae2e183727988d 45b6c3befe403c57c64897456aea1d1f627af2619be6ae801ddef2b592be0f0b]
I0329 17:10:54.299195 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.303131 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.306531 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0329 17:10:54.306608 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0329 17:10:54.343639 215882 cri.go:89] found id: "b1d8c9bdcf51275b8ee260ff27008e50a288380eee492e2ea65a17eecd633a05"
I0329 17:10:54.343659 215882 cri.go:89] found id: "a5dfee4a506c778834f309546822471bfe29cb70606442c7cda067bc889ec4e8"
I0329 17:10:54.343664 215882 cri.go:89] found id: ""
I0329 17:10:54.343671 215882 logs.go:282] 2 containers: [b1d8c9bdcf51275b8ee260ff27008e50a288380eee492e2ea65a17eecd633a05 a5dfee4a506c778834f309546822471bfe29cb70606442c7cda067bc889ec4e8]
I0329 17:10:54.343732 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.347177 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.352501 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0329 17:10:54.352579 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0329 17:10:54.390869 215882 cri.go:89] found id: "bfdb0b4297e36c3796a7a426259c1feba5b7a1a067614b3d35556d6bdbbc76ee"
I0329 17:10:54.390900 215882 cri.go:89] found id: "5844d741e22ff09ae2c803a73a957b04b60d9c6d7c529313cb014e7d6aa2cd2b"
I0329 17:10:54.390905 215882 cri.go:89] found id: ""
I0329 17:10:54.390912 215882 logs.go:282] 2 containers: [bfdb0b4297e36c3796a7a426259c1feba5b7a1a067614b3d35556d6bdbbc76ee 5844d741e22ff09ae2c803a73a957b04b60d9c6d7c529313cb014e7d6aa2cd2b]
I0329 17:10:54.390969 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.394681 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.398070 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0329 17:10:54.398139 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0329 17:10:54.441772 215882 cri.go:89] found id: "476f8a391def914e7a67fa8cc7c10883e946e9e4bdd7cef65ed70f98df3ef191"
I0329 17:10:54.441843 215882 cri.go:89] found id: "2bb6df8707154298dfc0cb21f5c505ece8764779eb903346bffb88181622549c"
I0329 17:10:54.441862 215882 cri.go:89] found id: ""
I0329 17:10:54.441884 215882 logs.go:282] 2 containers: [476f8a391def914e7a67fa8cc7c10883e946e9e4bdd7cef65ed70f98df3ef191 2bb6df8707154298dfc0cb21f5c505ece8764779eb903346bffb88181622549c]
I0329 17:10:54.441965 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.446220 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.450091 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0329 17:10:54.450213 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0329 17:10:54.487544 215882 cri.go:89] found id: "476bef2c0ac3db5deaf24b4ec3339f3ab38a00eace7a38e235325e571ed11ea1"
I0329 17:10:54.487565 215882 cri.go:89] found id: "fdf64c9da80b16dd615c8a65bf20e7dbdac57ddd63ab9ea71557f869e4214e70"
I0329 17:10:54.487570 215882 cri.go:89] found id: ""
I0329 17:10:54.487577 215882 logs.go:282] 2 containers: [476bef2c0ac3db5deaf24b4ec3339f3ab38a00eace7a38e235325e571ed11ea1 fdf64c9da80b16dd615c8a65bf20e7dbdac57ddd63ab9ea71557f869e4214e70]
I0329 17:10:54.487634 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.491066 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.494517 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0329 17:10:54.494609 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0329 17:10:54.541722 215882 cri.go:89] found id: "08cd64e5efe8ff52775b40477442e26a8cdba2671752a9c947ffb54af011e505"
I0329 17:10:54.541749 215882 cri.go:89] found id: "d73c4b171565d73ace6405634046441c894c4d53c7ca6b54394fc1f03f94bf95"
I0329 17:10:54.541755 215882 cri.go:89] found id: ""
I0329 17:10:54.541762 215882 logs.go:282] 2 containers: [08cd64e5efe8ff52775b40477442e26a8cdba2671752a9c947ffb54af011e505 d73c4b171565d73ace6405634046441c894c4d53c7ca6b54394fc1f03f94bf95]
I0329 17:10:54.541827 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.546000 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.549645 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0329 17:10:54.549721 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0329 17:10:54.586747 215882 cri.go:89] found id: "9affe02cbdfd87007f6bb996c748c3449799a41720b75c90d252c62fcb927af2"
I0329 17:10:54.586769 215882 cri.go:89] found id: ""
I0329 17:10:54.586777 215882 logs.go:282] 1 containers: [9affe02cbdfd87007f6bb996c748c3449799a41720b75c90d252c62fcb927af2]
I0329 17:10:54.586862 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.590372 215882 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0329 17:10:54.590468 215882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0329 17:10:54.633594 215882 cri.go:89] found id: "7170455291cf3cb3b0f76ac6cd41db4b1dae2328482597589a839fe7a7e8e9a2"
I0329 17:10:54.633618 215882 cri.go:89] found id: "c332b5510a0e67142842621140541b4ab72255b20ae34d6867fef0ea4307b24b"
I0329 17:10:54.633629 215882 cri.go:89] found id: ""
I0329 17:10:54.633637 215882 logs.go:282] 2 containers: [7170455291cf3cb3b0f76ac6cd41db4b1dae2328482597589a839fe7a7e8e9a2 c332b5510a0e67142842621140541b4ab72255b20ae34d6867fef0ea4307b24b]
I0329 17:10:54.633693 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.637395 215882 ssh_runner.go:195] Run: which crictl
I0329 17:10:54.640645 215882 logs.go:123] Gathering logs for container status ...
I0329 17:10:54.640673 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0329 17:10:54.687791 215882 logs.go:123] Gathering logs for kube-apiserver [1d665425da32470bd2630153351ec20d276fad049da85eaa1032a6def1a1deff] ...
I0329 17:10:54.687822 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d665425da32470bd2630153351ec20d276fad049da85eaa1032a6def1a1deff"
I0329 17:10:54.771349 215882 logs.go:123] Gathering logs for kube-scheduler [5844d741e22ff09ae2c803a73a957b04b60d9c6d7c529313cb014e7d6aa2cd2b] ...
I0329 17:10:54.771408 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5844d741e22ff09ae2c803a73a957b04b60d9c6d7c529313cb014e7d6aa2cd2b"
I0329 17:10:54.814306 215882 logs.go:123] Gathering logs for kube-proxy [476f8a391def914e7a67fa8cc7c10883e946e9e4bdd7cef65ed70f98df3ef191] ...
I0329 17:10:54.814338 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476f8a391def914e7a67fa8cc7c10883e946e9e4bdd7cef65ed70f98df3ef191"
I0329 17:10:54.852482 215882 logs.go:123] Gathering logs for kube-proxy [2bb6df8707154298dfc0cb21f5c505ece8764779eb903346bffb88181622549c] ...
I0329 17:10:54.852513 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bb6df8707154298dfc0cb21f5c505ece8764779eb903346bffb88181622549c"
I0329 17:10:54.892560 215882 logs.go:123] Gathering logs for kube-controller-manager [476bef2c0ac3db5deaf24b4ec3339f3ab38a00eace7a38e235325e571ed11ea1] ...
I0329 17:10:54.892587 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476bef2c0ac3db5deaf24b4ec3339f3ab38a00eace7a38e235325e571ed11ea1"
I0329 17:10:54.952387 215882 logs.go:123] Gathering logs for containerd ...
I0329 17:10:54.952422 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0329 17:10:55.009519 215882 logs.go:123] Gathering logs for describe nodes ...
I0329 17:10:55.009553 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0329 17:10:55.154021 215882 logs.go:123] Gathering logs for kube-apiserver [e4211e5b58844aac9df57b0f72dd7ef968a74d18917ec3d0dc6bca362a5d010f] ...
I0329 17:10:55.154055 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4211e5b58844aac9df57b0f72dd7ef968a74d18917ec3d0dc6bca362a5d010f"
I0329 17:10:55.208958 215882 logs.go:123] Gathering logs for etcd [45b6c3befe403c57c64897456aea1d1f627af2619be6ae801ddef2b592be0f0b] ...
I0329 17:10:55.208993 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 45b6c3befe403c57c64897456aea1d1f627af2619be6ae801ddef2b592be0f0b"
I0329 17:10:55.256203 215882 logs.go:123] Gathering logs for kube-controller-manager [fdf64c9da80b16dd615c8a65bf20e7dbdac57ddd63ab9ea71557f869e4214e70] ...
I0329 17:10:55.256234 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdf64c9da80b16dd615c8a65bf20e7dbdac57ddd63ab9ea71557f869e4214e70"
I0329 17:10:55.350168 215882 logs.go:123] Gathering logs for kindnet [08cd64e5efe8ff52775b40477442e26a8cdba2671752a9c947ffb54af011e505] ...
I0329 17:10:55.350203 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08cd64e5efe8ff52775b40477442e26a8cdba2671752a9c947ffb54af011e505"
I0329 17:10:55.398065 215882 logs.go:123] Gathering logs for kindnet [d73c4b171565d73ace6405634046441c894c4d53c7ca6b54394fc1f03f94bf95] ...
I0329 17:10:55.398094 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d73c4b171565d73ace6405634046441c894c4d53c7ca6b54394fc1f03f94bf95"
I0329 17:10:55.438437 215882 logs.go:123] Gathering logs for kubernetes-dashboard [9affe02cbdfd87007f6bb996c748c3449799a41720b75c90d252c62fcb927af2] ...
I0329 17:10:55.438466 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9affe02cbdfd87007f6bb996c748c3449799a41720b75c90d252c62fcb927af2"
I0329 17:10:55.479730 215882 logs.go:123] Gathering logs for storage-provisioner [7170455291cf3cb3b0f76ac6cd41db4b1dae2328482597589a839fe7a7e8e9a2] ...
I0329 17:10:55.479762 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7170455291cf3cb3b0f76ac6cd41db4b1dae2328482597589a839fe7a7e8e9a2"
I0329 17:10:55.519820 215882 logs.go:123] Gathering logs for kubelet ...
I0329 17:10:55.519850 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0329 17:10:55.575492 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.715259 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:55.575726 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.715674 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-pmvfp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-pmvfp" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:55.575956 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767001 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-rwmwb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-rwmwb" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:55.576171 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767092 662 reflector.go:138] object-"kube-system"/"kindnet-token-m4t4x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-m4t4x" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:55.576379 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767148 662 reflector.go:138] object-"default"/"default-token-xbb7l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-xbb7l" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:55.576581 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767202 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:55.576795 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767251 662 reflector.go:138] object-"kube-system"/"coredns-token-b8sn7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-b8sn7" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:55.577014 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:23 old-k8s-version-469910 kubelet[662]: E0329 17:05:23.767297 662 reflector.go:138] object-"kube-system"/"metrics-server-token-fmlf6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fmlf6" is forbidden: User "system:node:old-k8s-version-469910" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-469910' and this object
W0329 17:10:55.584672 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:27 old-k8s-version-469910 kubelet[662]: E0329 17:05:27.314655 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:10:55.584865 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:28 old-k8s-version-469910 kubelet[662]: E0329 17:05:28.220496 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.587659 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:39 old-k8s-version-469910 kubelet[662]: E0329 17:05:39.951360 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:10:55.589600 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:52 old-k8s-version-469910 kubelet[662]: E0329 17:05:52.313089 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.590393 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:53 old-k8s-version-469910 kubelet[662]: E0329 17:05:53.327259 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.590578 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:54 old-k8s-version-469910 kubelet[662]: E0329 17:05:54.925664 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.590904 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:57 old-k8s-version-469910 kubelet[662]: E0329 17:05:57.710953 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.591356 215882 logs.go:138] Found kubelet problem: Mar 29 17:05:58 old-k8s-version-469910 kubelet[662]: E0329 17:05:58.342508 662 pod_workers.go:191] Error syncing pod 59a53811-f6b6-411a-a123-792d40062106 ("storage-provisioner_kube-system(59a53811-f6b6-411a-a123-792d40062106)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(59a53811-f6b6-411a-a123-792d40062106)"
W0329 17:10:55.592286 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:08 old-k8s-version-469910 kubelet[662]: E0329 17:06:08.376633 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.594769 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:08 old-k8s-version-469910 kubelet[662]: E0329 17:06:08.946979 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:10:55.595228 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:17 old-k8s-version-469910 kubelet[662]: E0329 17:06:17.711547 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.595420 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:21 old-k8s-version-469910 kubelet[662]: E0329 17:06:21.924650 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.596010 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:31 old-k8s-version-469910 kubelet[662]: E0329 17:06:31.441423 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.596196 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:33 old-k8s-version-469910 kubelet[662]: E0329 17:06:33.924999 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.596523 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:37 old-k8s-version-469910 kubelet[662]: E0329 17:06:37.711494 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.596707 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:46 old-k8s-version-469910 kubelet[662]: E0329 17:06:46.924781 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.597032 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:50 old-k8s-version-469910 kubelet[662]: E0329 17:06:50.924485 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.599534 215882 logs.go:138] Found kubelet problem: Mar 29 17:06:59 old-k8s-version-469910 kubelet[662]: E0329 17:06:59.935763 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:10:55.599867 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:05 old-k8s-version-469910 kubelet[662]: E0329 17:07:05.924374 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.600053 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:13 old-k8s-version-469910 kubelet[662]: E0329 17:07:13.924843 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.600641 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:17 old-k8s-version-469910 kubelet[662]: E0329 17:07:17.575307 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.600966 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:18 old-k8s-version-469910 kubelet[662]: E0329 17:07:18.580848 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.601149 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:24 old-k8s-version-469910 kubelet[662]: E0329 17:07:24.925331 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.601475 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:30 old-k8s-version-469910 kubelet[662]: E0329 17:07:30.924368 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.601658 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:37 old-k8s-version-469910 kubelet[662]: E0329 17:07:37.924941 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.602011 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:41 old-k8s-version-469910 kubelet[662]: E0329 17:07:41.924437 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.602202 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:50 old-k8s-version-469910 kubelet[662]: E0329 17:07:50.926604 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.602527 215882 logs.go:138] Found kubelet problem: Mar 29 17:07:55 old-k8s-version-469910 kubelet[662]: E0329 17:07:55.924374 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.602710 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:03 old-k8s-version-469910 kubelet[662]: E0329 17:08:03.925408 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.603036 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:09 old-k8s-version-469910 kubelet[662]: E0329 17:08:09.924355 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.603220 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:14 old-k8s-version-469910 kubelet[662]: E0329 17:08:14.924851 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.603565 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:23 old-k8s-version-469910 kubelet[662]: E0329 17:08:23.924360 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.606001 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:25 old-k8s-version-469910 kubelet[662]: E0329 17:08:25.935664 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:10:55.606332 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:36 old-k8s-version-469910 kubelet[662]: E0329 17:08:36.924770 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.606516 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:37 old-k8s-version-469910 kubelet[662]: E0329 17:08:37.924679 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.607100 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:49 old-k8s-version-469910 kubelet[662]: E0329 17:08:49.806140 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.607284 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:49 old-k8s-version-469910 kubelet[662]: E0329 17:08:49.924719 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.607615 215882 logs.go:138] Found kubelet problem: Mar 29 17:08:57 old-k8s-version-469910 kubelet[662]: E0329 17:08:57.711479 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.607798 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:02 old-k8s-version-469910 kubelet[662]: E0329 17:09:02.927605 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.608123 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:09 old-k8s-version-469910 kubelet[662]: E0329 17:09:09.925706 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.608306 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:14 old-k8s-version-469910 kubelet[662]: E0329 17:09:14.926280 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.608633 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:22 old-k8s-version-469910 kubelet[662]: E0329 17:09:22.924663 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.608815 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:27 old-k8s-version-469910 kubelet[662]: E0329 17:09:27.924793 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.609140 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:37 old-k8s-version-469910 kubelet[662]: E0329 17:09:37.924460 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.609322 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:38 old-k8s-version-469910 kubelet[662]: E0329 17:09:38.926782 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.609648 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:48 old-k8s-version-469910 kubelet[662]: E0329 17:09:48.928853 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.609831 215882 logs.go:138] Found kubelet problem: Mar 29 17:09:49 old-k8s-version-469910 kubelet[662]: E0329 17:09:49.925066 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.610160 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:02 old-k8s-version-469910 kubelet[662]: E0329 17:10:02.926703 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.610342 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:02 old-k8s-version-469910 kubelet[662]: E0329 17:10:02.928431 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.610536 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:15 old-k8s-version-469910 kubelet[662]: E0329 17:10:15.925755 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.610861 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:16 old-k8s-version-469910 kubelet[662]: E0329 17:10:16.924947 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.611188 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:27 old-k8s-version-469910 kubelet[662]: E0329 17:10:27.924876 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.611378 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:28 old-k8s-version-469910 kubelet[662]: E0329 17:10:28.924927 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.611703 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:39 old-k8s-version-469910 kubelet[662]: E0329 17:10:39.925004 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.611886 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:40 old-k8s-version-469910 kubelet[662]: E0329 17:10:40.928277 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.612211 215882 logs.go:138] Found kubelet problem: Mar 29 17:10:53 old-k8s-version-469910 kubelet[662]: E0329 17:10:53.924385 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
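Note: both failure loops above are an expected consequence of this test's setup rather than cluster flakiness: metrics-server was registered against the unreachable registry fake.domain (hence ImagePullBackOff), and dashboard-metrics-scraper was pointed at registry.k8s.io/echoserver:1.4 (see the "addons enable" entries in the Audit table below), which keeps crashing into CrashLoopBackOff. A minimal client-go sketch of how one could enumerate containers stuck in such waiting states (the kubeconfig path and package layout are illustrative, not part of this test suite):

// list every container that the kubelet reports as "waiting",
// with the reason (CrashLoopBackOff, ImagePullBackOff, ...) and message
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// assumes the default ~/.kube/config; a real harness would inject its own
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				fmt.Printf("%s/%s %s: %s (%s)\n",
					pod.Namespace, pod.Name, cs.Name,
					cs.State.Waiting.Reason, cs.State.Waiting.Message)
			}
		}
	}
}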
I0329 17:10:55.612223 215882 logs.go:123] Gathering logs for coredns [b1d8c9bdcf51275b8ee260ff27008e50a288380eee492e2ea65a17eecd633a05] ...
I0329 17:10:55.612239 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1d8c9bdcf51275b8ee260ff27008e50a288380eee492e2ea65a17eecd633a05"
I0329 17:10:55.652423 215882 logs.go:123] Gathering logs for kube-scheduler [bfdb0b4297e36c3796a7a426259c1feba5b7a1a067614b3d35556d6bdbbc76ee] ...
I0329 17:10:55.652455 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfdb0b4297e36c3796a7a426259c1feba5b7a1a067614b3d35556d6bdbbc76ee"
I0329 17:10:55.693896 215882 logs.go:123] Gathering logs for storage-provisioner [c332b5510a0e67142842621140541b4ab72255b20ae34d6867fef0ea4307b24b] ...
I0329 17:10:55.693924 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c332b5510a0e67142842621140541b4ab72255b20ae34d6867fef0ea4307b24b"
I0329 17:10:55.730616 215882 logs.go:123] Gathering logs for dmesg ...
I0329 17:10:55.730645 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0329 17:10:55.745908 215882 logs.go:123] Gathering logs for etcd [e8ee39792992c3690c7a1594f566f69559236cf0a1ffa535c6ae2e183727988d] ...
I0329 17:10:55.745935 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8ee39792992c3690c7a1594f566f69559236cf0a1ffa535c6ae2e183727988d"
I0329 17:10:55.786543 215882 logs.go:123] Gathering logs for coredns [a5dfee4a506c778834f309546822471bfe29cb70606442c7cda067bc889ec4e8] ...
I0329 17:10:55.786572 215882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5dfee4a506c778834f309546822471bfe29cb70606442c7cda067bc889ec4e8"
I0329 17:10:55.821550 215882 out.go:358] Setting ErrFile to fd 2...
I0329 17:10:55.821575 215882 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0329 17:10:55.821649 215882 out.go:270] X Problems detected in kubelet:
W0329 17:10:55.821664 215882 out.go:270] Mar 29 17:10:27 old-k8s-version-469910 kubelet[662]: E0329 17:10:27.924876 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.821692 215882 out.go:270] Mar 29 17:10:28 old-k8s-version-469910 kubelet[662]: E0329 17:10:28.924927 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.821707 215882 out.go:270] Mar 29 17:10:39 old-k8s-version-469910 kubelet[662]: E0329 17:10:39.925004 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
W0329 17:10:55.821731 215882 out.go:270] Mar 29 17:10:40 old-k8s-version-469910 kubelet[662]: E0329 17:10:40.928277 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:10:55.821741 215882 out.go:270] Mar 29 17:10:53 old-k8s-version-469910 kubelet[662]: E0329 17:10:53.924385 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
I0329 17:10:55.821752 215882 out.go:358] Setting ErrFile to fd 2...
I0329 17:10:55.821759 215882 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:11:05.822920 215882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0329 17:11:05.834495 215882 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0329 17:11:05.838187 215882 out.go:201]
W0329 17:11:05.841303 215882 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0329 17:11:05.841442 215882 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0329 17:11:05.841477 215882 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0329 17:11:05.841491 215882 out.go:270] *
W0329 17:11:05.842469 215882 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0329 17:11:05.844500 215882 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-469910 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
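For context, the harness pattern behind "(dbg) Non-zero exit ... exit status 102" is ordinary os/exec plumbing: run the binary under test, capture combined output, and recover the process exit code. A self-contained sketch (binary path and arguments copied from above; this is an illustration, not the test suite's actual helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64",
		"start", "-p", "old-k8s-version-469910", "--alsologtostderr")
	out, err := cmd.CombinedOutput() // stdout and stderr interleaved, as in the log above
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// the harness reports this as "exit status N" and fails the test
		fmt.Printf("exit status %d\n%s\n", ee.ExitCode(), out)
	}
}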
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-469910
helpers_test.go:235: (dbg) docker inspect old-k8s-version-469910:
-- stdout --
[
{
"Id": "65755c4c206c0db5f8234b5a122d4ea2aec359fa10bfa87acd25f84de3270b04",
"Created": "2025-03-29T17:01:46.696556012Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 216235,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-03-29T17:04:57.014610369Z",
"FinishedAt": "2025-03-29T17:04:55.90994041Z"
},
"Image": "sha256:df0c2544fb3106b890f0a9ab81fcf49f97edb092b83e47f42288ad5dfe1f4b40",
"ResolvConfPath": "/var/lib/docker/containers/65755c4c206c0db5f8234b5a122d4ea2aec359fa10bfa87acd25f84de3270b04/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/65755c4c206c0db5f8234b5a122d4ea2aec359fa10bfa87acd25f84de3270b04/hostname",
"HostsPath": "/var/lib/docker/containers/65755c4c206c0db5f8234b5a122d4ea2aec359fa10bfa87acd25f84de3270b04/hosts",
"LogPath": "/var/lib/docker/containers/65755c4c206c0db5f8234b5a122d4ea2aec359fa10bfa87acd25f84de3270b04/65755c4c206c0db5f8234b5a122d4ea2aec359fa10bfa87acd25f84de3270b04-json.log",
"Name": "/old-k8s-version-469910",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-469910:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-469910",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "65755c4c206c0db5f8234b5a122d4ea2aec359fa10bfa87acd25f84de3270b04",
"LowerDir": "/var/lib/docker/overlay2/abd6bfe789a2a6e71caed22fac8f5261fbce809ac89ae64b1b40c859cd81bcf7-init/diff:/var/lib/docker/overlay2/0fa69d3592bbe5ed47c226385d40ec5047267751a04fb51a5b54441830d1f01b/diff",
"MergedDir": "/var/lib/docker/overlay2/abd6bfe789a2a6e71caed22fac8f5261fbce809ac89ae64b1b40c859cd81bcf7/merged",
"UpperDir": "/var/lib/docker/overlay2/abd6bfe789a2a6e71caed22fac8f5261fbce809ac89ae64b1b40c859cd81bcf7/diff",
"WorkDir": "/var/lib/docker/overlay2/abd6bfe789a2a6e71caed22fac8f5261fbce809ac89ae64b1b40c859cd81bcf7/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-469910",
"Source": "/var/lib/docker/volumes/old-k8s-version-469910/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-469910",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-469910",
"name.minikube.sigs.k8s.io": "old-k8s-version-469910",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "82ed23cb1933994bf2320200437244f9d388fe27d27c4a7dc5654678f3436a15",
"SandboxKey": "/var/run/docker/netns/82ed23cb1933",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33068"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33069"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33072"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33070"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33071"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-469910": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "d2:c4:15:da:38:5d",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "8ea85d710a49d60bd82b366b876b93833dfe4d346965bc1187e0206ed2cd8a3a",
"EndpointID": "4e2e2ea440ac5b9f35403abd4048b540bf5a8150795236c838d58885223f3a23",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-469910",
"65755c4c206c"
]
}
}
}
}
]
-- /stdout --
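The post-mortem (and the cli_runner lines later in this log) read individual fields out of this JSON with docker's -f/--format Go templates rather than parsing the whole document. A small sketch of the same two probes, assuming only a local docker CLI on PATH (the template strings are the ones that appear verbatim in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspect runs "docker container inspect -f <format> <name>" and trims the result.
func inspect(name, format string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := inspect("old-k8s-version-469910", "{{.State.Status}}")
	if err != nil {
		panic(err)
	}
	// host port that 127.0.0.1 maps to the container's 22/tcp SSH endpoint
	sshPort, err := inspect("old-k8s-version-469910",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("state=%s ssh=127.0.0.1:%s\n", state, sshPort)
}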
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-469910 -n old-k8s-version-469910
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-469910 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-469910 logs -n 25: (3.327517942s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-options-985109 | cert-options-985109 | jenkins | v1.35.0 | 29 Mar 25 17:01 UTC | 29 Mar 25 17:01 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-985109 ssh | cert-options-985109 | jenkins | v1.35.0 | 29 Mar 25 17:01 UTC | 29 Mar 25 17:01 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-985109 -- sudo | cert-options-985109 | jenkins | v1.35.0 | 29 Mar 25 17:01 UTC | 29 Mar 25 17:01 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-985109 | cert-options-985109 | jenkins | v1.35.0 | 29 Mar 25 17:01 UTC | 29 Mar 25 17:01 UTC |
| start | -p old-k8s-version-469910 | old-k8s-version-469910 | jenkins | v1.35.0 | 29 Mar 25 17:01 UTC | 29 Mar 25 17:04 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-538342 | cert-expiration-538342 | jenkins | v1.35.0 | 29 Mar 25 17:03 UTC | 29 Mar 25 17:03 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-538342 | cert-expiration-538342 | jenkins | v1.35.0 | 29 Mar 25 17:03 UTC | 29 Mar 25 17:03 UTC |
| start | -p no-preload-368928 | no-preload-368928 | jenkins | v1.35.0 | 29 Mar 25 17:03 UTC | 29 Mar 25 17:04 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p no-preload-368928 | no-preload-368928 | jenkins | v1.35.0 | 29 Mar 25 17:04 UTC | 29 Mar 25 17:04 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| addons | enable metrics-server -p old-k8s-version-469910 | old-k8s-version-469910 | jenkins | v1.35.0 | 29 Mar 25 17:04 UTC | 29 Mar 25 17:04 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-368928 | no-preload-368928 | jenkins | v1.35.0 | 29 Mar 25 17:04 UTC | 29 Mar 25 17:04 UTC |
| | --alsologtostderr -v=3 | | | | | |
| stop | -p old-k8s-version-469910 | old-k8s-version-469910 | jenkins | v1.35.0 | 29 Mar 25 17:04 UTC | 29 Mar 25 17:04 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-368928 | no-preload-368928 | jenkins | v1.35.0 | 29 Mar 25 17:04 UTC | 29 Mar 25 17:04 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| addons | enable dashboard -p old-k8s-version-469910 | old-k8s-version-469910 | jenkins | v1.35.0 | 29 Mar 25 17:04 UTC | 29 Mar 25 17:04 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-469910 | old-k8s-version-469910 | jenkins | v1.35.0 | 29 Mar 25 17:04 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| image | no-preload-368928 image list | no-preload-368928 | jenkins | v1.35.0 | 29 Mar 25 17:09 UTC | 29 Mar 25 17:09 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-368928 | no-preload-368928 | jenkins | v1.35.0 | 29 Mar 25 17:09 UTC | 29 Mar 25 17:09 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-368928 | no-preload-368928 | jenkins | v1.35.0 | 29 Mar 25 17:09 UTC | 29 Mar 25 17:09 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-368928 | no-preload-368928 | jenkins | v1.35.0 | 29 Mar 25 17:09 UTC | 29 Mar 25 17:09 UTC |
| delete | -p no-preload-368928 | no-preload-368928 | jenkins | v1.35.0 | 29 Mar 25 17:09 UTC | 29 Mar 25 17:09 UTC |
| start | -p embed-certs-728937 | embed-certs-728937 | jenkins | v1.35.0 | 29 Mar 25 17:09 UTC | 29 Mar 25 17:10 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p embed-certs-728937 | embed-certs-728937 | jenkins | v1.35.0 | 29 Mar 25 17:10 UTC | 29 Mar 25 17:10 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p embed-certs-728937 | embed-certs-728937 | jenkins | v1.35.0 | 29 Mar 25 17:10 UTC | 29 Mar 25 17:10 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p embed-certs-728937 | embed-certs-728937 | jenkins | v1.35.0 | 29 Mar 25 17:10 UTC | 29 Mar 25 17:10 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p embed-certs-728937 | embed-certs-728937 | jenkins | v1.35.0 | 29 Mar 25 17:10 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/03/29 17:10:57
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.24.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
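Every line below follows the klog-style header just quoted. A self-contained sketch (hypothetical, not minikube code) that splits one such line into its fields:

package main

import (
	"fmt"
	"regexp"
)

// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I0329 17:10:57.673711 228913 out.go:345] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}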
I0329 17:10:57.673711 228913 out.go:345] Setting OutFile to fd 1 ...
I0329 17:10:57.673953 228913 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:10:57.673987 228913 out.go:358] Setting ErrFile to fd 2...
I0329 17:10:57.674010 228913 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:10:57.674276 228913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20470-2310/.minikube/bin
I0329 17:10:57.674683 228913 out.go:352] Setting JSON to false
I0329 17:10:57.675730 228913 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6808,"bootTime":1743261450,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1080-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I0329 17:10:57.675830 228913 start.go:139] virtualization:
I0329 17:10:57.679255 228913 out.go:177] * [embed-certs-728937] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0329 17:10:57.683176 228913 out.go:177] - MINIKUBE_LOCATION=20470
I0329 17:10:57.683256 228913 notify.go:220] Checking for updates...
I0329 17:10:57.691129 228913 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0329 17:10:57.694008 228913 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20470-2310/kubeconfig
I0329 17:10:57.696908 228913 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20470-2310/.minikube
I0329 17:10:57.699799 228913 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0329 17:10:57.703106 228913 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0329 17:10:57.706564 228913 config.go:182] Loaded profile config "embed-certs-728937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0329 17:10:57.707166 228913 driver.go:394] Setting default libvirt URI to qemu:///system
I0329 17:10:57.744382 228913 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0329 17:10:57.744503 228913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0329 17:10:57.823211 228913 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-03-29 17:10:57.812578934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1080-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:753481ec61c7c8955a23d6ff7bc8e4daed455734 Expected:753481ec61c7c8955a23d6ff7bc8e4daed455734} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0329 17:10:57.823324 228913 docker.go:318] overlay module found
I0329 17:10:57.826497 228913 out.go:177] * Using the docker driver based on existing profile
I0329 17:10:57.830825 228913 start.go:297] selected driver: docker
I0329 17:10:57.830845 228913 start.go:901] validating driver "docker" against &{Name:embed-certs-728937 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-728937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0329 17:10:57.830957 228913 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0329 17:10:57.831847 228913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0329 17:10:57.912945 228913 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-03-29 17:10:57.903087355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1080-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:753481ec61c7c8955a23d6ff7bc8e4daed455734 Expected:753481ec61c7c8955a23d6ff7bc8e4daed455734} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0329 17:10:57.913316 228913 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0329 17:10:57.913344 228913 cni.go:84] Creating CNI manager for ""
I0329 17:10:57.913400 228913 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0329 17:10:57.913437 228913 start.go:340] cluster config:
{Name:embed-certs-728937 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-728937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0329 17:10:57.916604 228913 out.go:177] * Starting "embed-certs-728937" primary control-plane node in "embed-certs-728937" cluster
I0329 17:10:57.919600 228913 cache.go:121] Beginning downloading kic base image for docker with containerd
I0329 17:10:57.922517 228913 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
I0329 17:10:57.925514 228913 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0329 17:10:57.925720 228913 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
I0329 17:10:57.926099 228913 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20470-2310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4
I0329 17:10:57.926113 228913 cache.go:56] Caching tarball of preloaded images
I0329 17:10:57.926195 228913 preload.go:172] Found /home/jenkins/minikube-integration/20470-2310/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0329 17:10:57.926205 228913 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0329 17:10:57.926316 228913 profile.go:143] Saving config to /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/embed-certs-728937/config.json ...
I0329 17:10:57.957680 228913 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
I0329 17:10:57.957699 228913 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
I0329 17:10:57.957712 228913 cache.go:230] Successfully downloaded all kic artifacts
I0329 17:10:57.957735 228913 start.go:360] acquireMachinesLock for embed-certs-728937: {Name:mk7c7a2bc6a30b6b1a6788fddcb9ac734da05fca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0329 17:10:57.957787 228913 start.go:364] duration metric: took 34.979µs to acquireMachinesLock for "embed-certs-728937"
I0329 17:10:57.957812 228913 start.go:96] Skipping create...Using existing machine configuration
I0329 17:10:57.957818 228913 fix.go:54] fixHost starting:
I0329 17:10:57.958242 228913 cli_runner.go:164] Run: docker container inspect embed-certs-728937 --format={{.State.Status}}
I0329 17:10:57.987221 228913 fix.go:112] recreateIfNeeded on embed-certs-728937: state=Stopped err=<nil>
W0329 17:10:57.987250 228913 fix.go:138] unexpected machine state, will restart: <nil>
I0329 17:10:57.990561 228913 out.go:177] * Restarting existing docker container for "embed-certs-728937" ...
I0329 17:10:57.993308 228913 cli_runner.go:164] Run: docker start embed-certs-728937
I0329 17:10:58.284449 228913 cli_runner.go:164] Run: docker container inspect embed-certs-728937 --format={{.State.Status}}
I0329 17:10:58.309857 228913 kic.go:430] container "embed-certs-728937" state is running.
I0329 17:10:58.310459 228913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-728937
I0329 17:10:58.333756 228913 profile.go:143] Saving config to /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/embed-certs-728937/config.json ...
I0329 17:10:58.333999 228913 machine.go:93] provisionDockerMachine start ...
I0329 17:10:58.334173 228913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728937
I0329 17:10:58.356694 228913 main.go:141] libmachine: Using SSH client type: native
I0329 17:10:58.358788 228913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33078 <nil> <nil>}
I0329 17:10:58.358808 228913 main.go:141] libmachine: About to run SSH command:
hostname
I0329 17:10:58.360457 228913 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0329 17:11:01.486961 228913 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-728937
I0329 17:11:01.486994 228913 ubuntu.go:169] provisioning hostname "embed-certs-728937"
I0329 17:11:01.487055 228913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728937
I0329 17:11:01.510577 228913 main.go:141] libmachine: Using SSH client type: native
I0329 17:11:01.510900 228913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33078 <nil> <nil>}
I0329 17:11:01.510919 228913 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-728937 && echo "embed-certs-728937" | sudo tee /etc/hostname
I0329 17:11:01.654371 228913 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-728937
I0329 17:11:01.654460 228913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728937
I0329 17:11:01.673487 228913 main.go:141] libmachine: Using SSH client type: native
I0329 17:11:01.673812 228913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33078 <nil> <nil>}
I0329 17:11:01.673836 228913 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-728937' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-728937/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-728937' | sudo tee -a /etc/hosts;
fi
fi
I0329 17:11:01.799810 228913 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0329 17:11:01.799839 228913 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20470-2310/.minikube CaCertPath:/home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20470-2310/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20470-2310/.minikube}
I0329 17:11:01.799873 228913 ubuntu.go:177] setting up certificates
I0329 17:11:01.799883 228913 provision.go:84] configureAuth start
I0329 17:11:01.799949 228913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-728937
I0329 17:11:01.820104 228913 provision.go:143] copyHostCerts
I0329 17:11:01.820178 228913 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2310/.minikube/ca.pem, removing ...
I0329 17:11:01.820197 228913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2310/.minikube/ca.pem
I0329 17:11:01.820287 228913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20470-2310/.minikube/ca.pem (1082 bytes)
I0329 17:11:01.820444 228913 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2310/.minikube/cert.pem, removing ...
I0329 17:11:01.820455 228913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2310/.minikube/cert.pem
I0329 17:11:01.820487 228913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20470-2310/.minikube/cert.pem (1123 bytes)
I0329 17:11:01.820543 228913 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2310/.minikube/key.pem, removing ...
I0329 17:11:01.820552 228913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2310/.minikube/key.pem
I0329 17:11:01.820576 228913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20470-2310/.minikube/key.pem (1679 bytes)
I0329 17:11:01.820633 228913 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20470-2310/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca-key.pem org=jenkins.embed-certs-728937 san=[127.0.0.1 192.168.85.2 embed-certs-728937 localhost minikube]
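The "generating server cert ... san=[...]" step above boils down to issuing an x509 server certificate whose SANs cover those IPs and DNS names. A throwaway sketch using only the standard library (self-signed here for brevity, whereas the real step signs with the profile's CA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-728937"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the SANs listed in the log line above
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"embed-certs-728937", "localhost", "minikube"},
	}
	// template == parent, i.e. self-signed
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}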
I0329 17:11:02.123785 228913 provision.go:177] copyRemoteCerts
I0329 17:11:02.123879 228913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0329 17:11:02.123939 228913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728937
I0329 17:11:02.150511 228913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/embed-certs-728937/id_rsa Username:docker}
I0329 17:11:02.248815 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0329 17:11:02.277466 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0329 17:11:02.304104 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0329 17:11:02.330270 228913 provision.go:87] duration metric: took 530.369972ms to configureAuth
I0329 17:11:02.330297 228913 ubuntu.go:193] setting minikube options for container-runtime
I0329 17:11:02.330506 228913 config.go:182] Loaded profile config "embed-certs-728937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0329 17:11:02.330515 228913 machine.go:96] duration metric: took 3.996507707s to provisionDockerMachine
I0329 17:11:02.330523 228913 start.go:293] postStartSetup for "embed-certs-728937" (driver="docker")
I0329 17:11:02.330533 228913 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0329 17:11:02.330581 228913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0329 17:11:02.330647 228913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728937
I0329 17:11:02.349006 228913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/embed-certs-728937/id_rsa Username:docker}
I0329 17:11:02.440594 228913 ssh_runner.go:195] Run: cat /etc/os-release
I0329 17:11:02.443780 228913 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0329 17:11:02.443820 228913 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0329 17:11:02.443832 228913 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0329 17:11:02.443839 228913 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0329 17:11:02.443849 228913 filesync.go:126] Scanning /home/jenkins/minikube-integration/20470-2310/.minikube/addons for local assets ...
I0329 17:11:02.443914 228913 filesync.go:126] Scanning /home/jenkins/minikube-integration/20470-2310/.minikube/files for local assets ...
I0329 17:11:02.443998 228913 filesync.go:149] local asset: /home/jenkins/minikube-integration/20470-2310/.minikube/files/etc/ssl/certs/77812.pem -> 77812.pem in /etc/ssl/certs
I0329 17:11:02.444108 228913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0329 17:11:02.452787 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/files/etc/ssl/certs/77812.pem --> /etc/ssl/certs/77812.pem (1708 bytes)
I0329 17:11:02.480613 228913 start.go:296] duration metric: took 150.075516ms for postStartSetup
I0329 17:11:02.480727 228913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0329 17:11:02.480793 228913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728937
I0329 17:11:02.498708 228913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/embed-certs-728937/id_rsa Username:docker}
I0329 17:11:02.592384 228913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0329 17:11:02.597812 228913 fix.go:56] duration metric: took 4.639987303s for fixHost
I0329 17:11:02.597839 228913 start.go:83] releasing machines lock for "embed-certs-728937", held for 4.640043566s
I0329 17:11:02.597910 228913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-728937
I0329 17:11:02.618296 228913 ssh_runner.go:195] Run: cat /version.json
I0329 17:11:02.618313 228913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0329 17:11:02.618347 228913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728937
I0329 17:11:02.618381 228913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728937
I0329 17:11:02.640414 228913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/embed-certs-728937/id_rsa Username:docker}
I0329 17:11:02.640754 228913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20470-2310/.minikube/machines/embed-certs-728937/id_rsa Username:docker}
I0329 17:11:02.731268 228913 ssh_runner.go:195] Run: systemctl --version
I0329 17:11:02.869742 228913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0329 17:11:02.874195 228913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0329 17:11:02.891998 228913 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0329 17:11:02.892077 228913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0329 17:11:02.902379 228913 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
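The find/sed pass above patches any loopback CNI config under /etc/cni/net.d: it injects a "name" field if one is missing and pins cniVersion to 1.0.0, since the CNI v1.0.0 spec requires every network config to be named. Assuming a stock containerd-shipped loopback file, the patched result would look roughly like:

    {
        "cniVersion": "1.0.0",
        "name": "loopback",
        "type": "loopback"
    }
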
I0329 17:11:02.902408 228913 start.go:498] detecting cgroup driver to use...
I0329 17:11:02.902440 228913 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0329 17:11:02.902496 228913 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0329 17:11:02.915976 228913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0329 17:11:02.930454 228913 docker.go:217] disabling cri-docker service (if available) ...
I0329 17:11:02.930519 228913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0329 17:11:02.944794 228913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0329 17:11:02.956202 228913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0329 17:11:03.039731 228913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0329 17:11:03.129528 228913 docker.go:233] disabling docker service ...
I0329 17:11:03.129597 228913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0329 17:11:03.149282 228913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0329 17:11:03.162049 228913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0329 17:11:03.252530 228913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0329 17:11:03.345590 228913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0329 17:11:03.357678 228913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0329 17:11:03.374787 228913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0329 17:11:03.386415 228913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0329 17:11:03.396707 228913 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0329 17:11:03.396827 228913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0329 17:11:03.407128 228913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0329 17:11:03.423957 228913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0329 17:11:03.434127 228913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0329 17:11:03.444351 228913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0329 17:11:03.453623 228913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0329 17:11:03.463491 228913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0329 17:11:03.473135 228913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
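Taken together, the sed series at 17:11:03.374 through 17:11:03.473 rewrites /etc/containerd/config.toml to match the detected cgroupfs driver: the pause image and CNI conf_dir are pinned, SystemdCgroup is forced off, legacy runtime names are mapped to io.containerd.runc.v2, and unprivileged ports are enabled. On a stock v2 config the touched keys would end up approximately as follows (a sketch of the relevant fragment, not the full file):

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10"

      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
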
I0329 17:11:03.483506 228913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0329 17:11:03.492264 228913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0329 17:11:03.501408 228913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0329 17:11:03.597534 228913 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0329 17:11:03.765730 228913 start.go:545] Will wait 60s for socket path /run/containerd/containerd.sock
I0329 17:11:03.765842 228913 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0329 17:11:03.770204 228913 start.go:566] Will wait 60s for crictl version
I0329 17:11:03.770315 228913 ssh_runner.go:195] Run: which crictl
I0329 17:11:03.773748 228913 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0329 17:11:03.812684 228913 start.go:582] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.25
RuntimeApiVersion: v1
I0329 17:11:03.812768 228913 ssh_runner.go:195] Run: containerd --version
I0329 17:11:03.835649 228913 ssh_runner.go:195] Run: containerd --version
I0329 17:11:03.861961 228913 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
I0329 17:11:03.865224 228913 cli_runner.go:164] Run: docker network inspect embed-certs-728937 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0329 17:11:03.880186 228913 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0329 17:11:03.883859 228913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
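The one-liner above is an idempotent hosts update: strip any existing host.minikube.internal line, append the fresh mapping, and copy the temp file into place (cp rather than mv, presumably because /etc/hosts inside the container is a Docker-managed bind mount that cannot be replaced by rename). A hedged Go sketch of the same filter-then-append pattern (updateHostsEntry is illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // updateHostsEntry drops any existing line for name and appends a fresh
    // "ip<TAB>name" mapping, mirroring { grep -v ...; echo ...; } > tmp; cp.
    func updateHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        // Write in place (like `sudo cp`) rather than renaming a temp file over
        // the target, which fails on a bind-mounted /etc/hosts.
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := updateHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
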
I0329 17:11:03.894618 228913 kubeadm.go:883] updating cluster {Name:embed-certs-728937 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-728937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0329 17:11:03.894734 228913 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0329 17:11:03.894799 228913 ssh_runner.go:195] Run: sudo crictl images --output json
I0329 17:11:03.938987 228913 containerd.go:627] all images are preloaded for containerd runtime.
I0329 17:11:03.939011 228913 containerd.go:534] Images already preloaded, skipping extraction
I0329 17:11:03.939074 228913 ssh_runner.go:195] Run: sudo crictl images --output json
I0329 17:11:03.984312 228913 containerd.go:627] all images are preloaded for containerd runtime.
I0329 17:11:03.984382 228913 cache_images.go:84] Images are preloaded, skipping loading
I0329 17:11:03.984398 228913 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 containerd true true} ...
I0329 17:11:03.984542 228913 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-728937 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:embed-certs-728937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0329 17:11:03.984615 228913 ssh_runner.go:195] Run: sudo crictl info
I0329 17:11:04.030383 228913 cni.go:84] Creating CNI manager for ""
I0329 17:11:04.030410 228913 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0329 17:11:04.030422 228913 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0329 17:11:04.030446 228913 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-728937 NodeName:embed-certs-728937 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0329 17:11:04.030567 228913 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-728937"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
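The rendered kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later scp'd to /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of pulling one document back out of such a stream, for example to confirm cgroupDriver matches the detected cgroupfs driver (uses gopkg.in/yaml.v3; this helper is not part of minikube):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF once all four documents are consumed
            }
            if doc["kind"] == "KubeletConfiguration" {
                // Should print "cgroupfs" given the config rendered above.
                fmt.Println("cgroupDriver:", doc["cgroupDriver"])
            }
        }
    }
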
I0329 17:11:04.030647 228913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0329 17:11:04.040174 228913 binaries.go:44] Found k8s binaries, skipping transfer
I0329 17:11:04.040289 228913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0329 17:11:04.049442 228913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0329 17:11:04.067233 228913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0329 17:11:04.087203 228913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0329 17:11:04.106908 228913 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0329 17:11:04.110753 228913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0329 17:11:04.121591 228913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0329 17:11:04.203295 228913 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0329 17:11:04.217802 228913 certs.go:68] Setting up /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/embed-certs-728937 for IP: 192.168.85.2
I0329 17:11:04.217822 228913 certs.go:194] generating shared ca certs ...
I0329 17:11:04.217838 228913 certs.go:226] acquiring lock for ca certs: {Name:mkd8f35c7fbd9d32ba41be2af2d591b6aa6cf234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:11:04.217981 228913 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20470-2310/.minikube/ca.key
I0329 17:11:04.218038 228913 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20470-2310/.minikube/proxy-client-ca.key
I0329 17:11:04.218051 228913 certs.go:256] generating profile certs ...
I0329 17:11:04.218140 228913 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/embed-certs-728937/client.key
I0329 17:11:04.218207 228913 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/embed-certs-728937/apiserver.key.1f09a27c
I0329 17:11:04.218251 228913 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/embed-certs-728937/proxy-client.key
I0329 17:11:04.218380 228913 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/7781.pem (1338 bytes)
W0329 17:11:04.218416 228913 certs.go:480] ignoring /home/jenkins/minikube-integration/20470-2310/.minikube/certs/7781_empty.pem, impossibly tiny 0 bytes
I0329 17:11:04.218429 228913 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca-key.pem (1675 bytes)
I0329 17:11:04.218454 228913 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/ca.pem (1082 bytes)
I0329 17:11:04.218481 228913 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/cert.pem (1123 bytes)
I0329 17:11:04.218506 228913 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/certs/key.pem (1679 bytes)
I0329 17:11:04.218554 228913 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2310/.minikube/files/etc/ssl/certs/77812.pem (1708 bytes)
I0329 17:11:04.219156 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0329 17:11:04.253056 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0329 17:11:04.281348 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0329 17:11:04.319201 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0329 17:11:04.354911 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/embed-certs-728937/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0329 17:11:04.391337 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/embed-certs-728937/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0329 17:11:04.422807 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/embed-certs-728937/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0329 17:11:04.453317 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/profiles/embed-certs-728937/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0329 17:11:04.476849 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0329 17:11:04.503158 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/certs/7781.pem --> /usr/share/ca-certificates/7781.pem (1338 bytes)
I0329 17:11:04.530077 228913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2310/.minikube/files/etc/ssl/certs/77812.pem --> /usr/share/ca-certificates/77812.pem (1708 bytes)
I0329 17:11:04.565466 228913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0329 17:11:04.583972 228913 ssh_runner.go:195] Run: openssl version
I0329 17:11:04.589626 228913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77812.pem && ln -fs /usr/share/ca-certificates/77812.pem /etc/ssl/certs/77812.pem"
I0329 17:11:04.599085 228913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77812.pem
I0329 17:11:04.602613 228913 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 29 16:26 /usr/share/ca-certificates/77812.pem
I0329 17:11:04.602676 228913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77812.pem
I0329 17:11:04.609651 228913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77812.pem /etc/ssl/certs/3ec20f2e.0"
I0329 17:11:04.618346 228913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0329 17:11:04.627971 228913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0329 17:11:04.631695 228913 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 29 16:18 /usr/share/ca-certificates/minikubeCA.pem
I0329 17:11:04.631806 228913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0329 17:11:04.639001 228913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0329 17:11:04.649471 228913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7781.pem && ln -fs /usr/share/ca-certificates/7781.pem /etc/ssl/certs/7781.pem"
I0329 17:11:04.659422 228913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7781.pem
I0329 17:11:04.662996 228913 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 29 16:26 /usr/share/ca-certificates/7781.pem
I0329 17:11:04.663081 228913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7781.pem
I0329 17:11:04.670293 228913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7781.pem /etc/ssl/certs/51391683.0"
I0329 17:11:04.679081 228913 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0329 17:11:04.682895 228913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0329 17:11:04.689720 228913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0329 17:11:04.696358 228913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0329 17:11:04.703127 228913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0329 17:11:04.710024 228913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0329 17:11:04.718083 228913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
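Each `openssl x509 -checkend 86400` probe above exits non-zero if the certificate expires within 24 hours, which is what would trigger regeneration on restart; here all six control-plane certs pass. A hedged Go equivalent using crypto/x509 (expiresWithin is illustrative, not minikube's implementation):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first PEM certificate in path expires
    // within d, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
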
I0329 17:11:04.725063 228913 kubeadm.go:392] StartCluster: {Name:embed-certs-728937 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-728937 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0329 17:11:04.725193 228913 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0329 17:11:04.725275 228913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0329 17:11:04.767032 228913 cri.go:89] found id: "b97913348658d2b992f38c528db9407ed36d778bc417ed79785f70d9b433d4b5"
I0329 17:11:04.767063 228913 cri.go:89] found id: "3a5564e96faa0a59ef7f8b3fc6248d85cf105bb007c5b05bb76cba5f7e8db1f4"
I0329 17:11:04.767077 228913 cri.go:89] found id: "30721d32d6c150790add97f5140b82bc9c7a29aafc5f3a88103f6a964b5d046b"
I0329 17:11:04.767082 228913 cri.go:89] found id: "d76ea8187a620987b8a973f6fad5fc0d6b20da31eedf84f453255ab595730701"
I0329 17:11:04.767085 228913 cri.go:89] found id: "9f43aab484797ebf3462b06bdfb3c9a33458d47bffa8fc479c2656b8208c9efd"
I0329 17:11:04.767089 228913 cri.go:89] found id: "48e2d7d12ab16fbd27cba6a9f61ae4b7cd2d50b7466ff57e327c5141e68e0e28"
I0329 17:11:04.767094 228913 cri.go:89] found id: "e331c94d79ff00021047d99a9a842e587c27c7f46879a4ccec5665828e17d457"
I0329 17:11:04.767097 228913 cri.go:89] found id: "10e01bac89e69e974a8f0f79b40bcaa1bc7e9e94a197852e9f37c8a14b33bdba"
I0329 17:11:04.767100 228913 cri.go:89] found id: ""
I0329 17:11:04.767158 228913 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0329 17:11:04.787896 228913 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-03-29T17:11:04Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0329 17:11:04.788039 228913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0329 17:11:04.798633 228913 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0329 17:11:04.798676 228913 kubeadm.go:593] restartPrimaryControlPlane start ...
I0329 17:11:04.798743 228913 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0329 17:11:04.811089 228913 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0329 17:11:04.811773 228913 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-728937" does not appear in /home/jenkins/minikube-integration/20470-2310/kubeconfig
I0329 17:11:04.812024 228913 kubeconfig.go:62] /home/jenkins/minikube-integration/20470-2310/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-728937" cluster setting kubeconfig missing "embed-certs-728937" context setting]
I0329 17:11:04.812464 228913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20470-2310/kubeconfig: {Name:mk67c59b90eac0925d283f0bd0edd038ba6c7c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:11:04.813745 228913 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0329 17:11:04.831398 228913 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I0329 17:11:04.831431 228913 kubeadm.go:597] duration metric: took 32.748037ms to restartPrimaryControlPlane
I0329 17:11:04.831441 228913 kubeadm.go:394] duration metric: took 106.388343ms to StartCluster
I0329 17:11:04.831455 228913 settings.go:142] acquiring lock: {Name:mk0e5c956c90ea91a9d840799eff947964a7a98c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:11:04.831514 228913 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20470-2310/kubeconfig
I0329 17:11:04.832759 228913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20470-2310/kubeconfig: {Name:mk67c59b90eac0925d283f0bd0edd038ba6c7c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:11:04.832975 228913 start.go:238] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0329 17:11:04.833322 228913 config.go:182] Loaded profile config "embed-certs-728937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0329 17:11:04.833314 228913 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0329 17:11:04.833396 228913 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-728937"
I0329 17:11:04.833412 228913 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-728937"
W0329 17:11:04.833418 228913 addons.go:247] addon storage-provisioner should already be in state true
I0329 17:11:04.833441 228913 host.go:66] Checking if "embed-certs-728937" exists ...
I0329 17:11:04.833908 228913 cli_runner.go:164] Run: docker container inspect embed-certs-728937 --format={{.State.Status}}
I0329 17:11:04.834048 228913 addons.go:69] Setting default-storageclass=true in profile "embed-certs-728937"
I0329 17:11:04.834064 228913 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-728937"
I0329 17:11:04.834295 228913 cli_runner.go:164] Run: docker container inspect embed-certs-728937 --format={{.State.Status}}
I0329 17:11:04.835937 228913 addons.go:69] Setting dashboard=true in profile "embed-certs-728937"
I0329 17:11:04.835967 228913 addons.go:238] Setting addon dashboard=true in "embed-certs-728937"
W0329 17:11:04.835975 228913 addons.go:247] addon dashboard should already be in state true
I0329 17:11:04.836001 228913 host.go:66] Checking if "embed-certs-728937" exists ...
I0329 17:11:04.836005 228913 addons.go:69] Setting metrics-server=true in profile "embed-certs-728937"
I0329 17:11:04.836065 228913 addons.go:238] Setting addon metrics-server=true in "embed-certs-728937"
W0329 17:11:04.836089 228913 addons.go:247] addon metrics-server should already be in state true
I0329 17:11:04.836143 228913 host.go:66] Checking if "embed-certs-728937" exists ...
I0329 17:11:04.836447 228913 cli_runner.go:164] Run: docker container inspect embed-certs-728937 --format={{.State.Status}}
I0329 17:11:04.836800 228913 cli_runner.go:164] Run: docker container inspect embed-certs-728937 --format={{.State.Status}}
I0329 17:11:04.837420 228913 out.go:177] * Verifying Kubernetes components...
I0329 17:11:04.842390 228913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0329 17:11:04.908987 228913 addons.go:238] Setting addon default-storageclass=true in "embed-certs-728937"
W0329 17:11:04.909025 228913 addons.go:247] addon default-storageclass should already be in state true
I0329 17:11:04.909056 228913 host.go:66] Checking if "embed-certs-728937" exists ...
I0329 17:11:04.909564 228913 cli_runner.go:164] Run: docker container inspect embed-certs-728937 --format={{.State.Status}}
I0329 17:11:04.930857 228913 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0329 17:11:04.930906 228913 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0329 17:11:04.933906 228913 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0329 17:11:04.933906 228913 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0329 17:11:04.934006 228913 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0329 17:11:04.934100 228913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728937
I0329 17:11:04.938951 228913 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:11:04.939031 228913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0329 17:11:04.939133 228913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-728937
I0329 17:11:04.941940 228913 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0329 17:11:05.822920 215882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0329 17:11:05.834495 215882 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0329 17:11:05.838187 215882 out.go:201]
W0329 17:11:05.841303 215882 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0329 17:11:05.841442 215882 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0329 17:11:05.841477 215882 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0329 17:11:05.841491 215882 out.go:270] *
W0329 17:11:05.842469 215882 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0329 17:11:05.844500 215882 out.go:201]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
14537e00fa900 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 02aef421fb85c dashboard-metrics-scraper-8d5bb5db8-5v95z
7170455291cf3 ba04bb24b9575 4 minutes ago Running storage-provisioner 2 efc3128c401b0 storage-provisioner
9affe02cbdfd8 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 89be080922d8d kubernetes-dashboard-cd95d586-r7dkj
08cd64e5efe8f ee75e27fff91c 5 minutes ago Running kindnet-cni 1 a9707aa729efa kindnet-wzljb
c332b5510a0e6 ba04bb24b9575 5 minutes ago Exited storage-provisioner 1 efc3128c401b0 storage-provisioner
476f8a391def9 25a5233254979 5 minutes ago Running kube-proxy 1 a6abf7916e8de kube-proxy-wcgkr
b1d8c9bdcf512 db91994f4ee8f 5 minutes ago Running coredns 1 9787c654023d5 coredns-74ff55c5b-svwdf
b109a27aefed9 1611cd07b61d5 5 minutes ago Running busybox 1 244e3c28f9705 busybox
476bef2c0ac3d 1df8a2b116bd1 5 minutes ago Running kube-controller-manager 1 41b22b9a8bc1d kube-controller-manager-old-k8s-version-469910
1d665425da324 2c08bbbc02d3a 5 minutes ago Running kube-apiserver 1 a758f57c350e4 kube-apiserver-old-k8s-version-469910
e8ee39792992c 05b738aa1bc63 5 minutes ago Running etcd 1 b46f51f3e231b etcd-old-k8s-version-469910
bfdb0b4297e36 e7605f88f17d6 5 minutes ago Running kube-scheduler 1 9fbbd2131da52 kube-scheduler-old-k8s-version-469910
428fec5581f73 1611cd07b61d5 6 minutes ago Exited busybox 0 227a351ac67b6 busybox
a5dfee4a506c7 db91994f4ee8f 8 minutes ago Exited coredns 0 87948e1a62007 coredns-74ff55c5b-svwdf
d73c4b171565d ee75e27fff91c 8 minutes ago Exited kindnet-cni 0 0e286387f21b5 kindnet-wzljb
2bb6df8707154 25a5233254979 8 minutes ago Exited kube-proxy 0 8fc1604f7c69a kube-proxy-wcgkr
45b6c3befe403 05b738aa1bc63 8 minutes ago Exited etcd 0 77d899be73258 etcd-old-k8s-version-469910
5844d741e22ff e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 672d0910262ca kube-scheduler-old-k8s-version-469910
e4211e5b58844 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 842a5cd139b8a kube-apiserver-old-k8s-version-469910
fdf64c9da80b1 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 bc0db483458d9 kube-controller-manager-old-k8s-version-469910
==> containerd <==
Mar 29 17:06:59 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:06:59.934868648Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Mar 29 17:07:16 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:07:16.930955437Z" level=info msg="CreateContainer within sandbox \"02aef421fb85c8e0273b0e5008b331b2ab46c83586927c96a037efc8089fd531\" for container name:\"dashboard-metrics-scraper\" attempt:4"
Mar 29 17:07:16 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:07:16.965377554Z" level=info msg="CreateContainer within sandbox \"02aef421fb85c8e0273b0e5008b331b2ab46c83586927c96a037efc8089fd531\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"5663a219b86739ca59951a40def0499436283c49a1b531327aeb40539502d7b9\""
Mar 29 17:07:16 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:07:16.967529057Z" level=info msg="StartContainer for \"5663a219b86739ca59951a40def0499436283c49a1b531327aeb40539502d7b9\""
Mar 29 17:07:17 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:07:17.048296911Z" level=info msg="StartContainer for \"5663a219b86739ca59951a40def0499436283c49a1b531327aeb40539502d7b9\" returns successfully"
Mar 29 17:07:17 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:07:17.048500800Z" level=info msg="received exit event container_id:\"5663a219b86739ca59951a40def0499436283c49a1b531327aeb40539502d7b9\" id:\"5663a219b86739ca59951a40def0499436283c49a1b531327aeb40539502d7b9\" pid:3007 exit_status:255 exited_at:{seconds:1743268037 nanos:48188505}"
Mar 29 17:07:17 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:07:17.080308327Z" level=info msg="shim disconnected" id=5663a219b86739ca59951a40def0499436283c49a1b531327aeb40539502d7b9 namespace=k8s.io
Mar 29 17:07:17 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:07:17.080376988Z" level=warning msg="cleaning up after shim disconnected" id=5663a219b86739ca59951a40def0499436283c49a1b531327aeb40539502d7b9 namespace=k8s.io
Mar 29 17:07:17 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:07:17.080388590Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 29 17:07:17 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:07:17.580807638Z" level=info msg="RemoveContainer for \"6a74d384e1eef93335abc7050692e137727921388e0bb07996e7f422738fef60\""
Mar 29 17:07:17 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:07:17.589763557Z" level=info msg="RemoveContainer for \"6a74d384e1eef93335abc7050692e137727921388e0bb07996e7f422738fef60\" returns successfully"
Mar 29 17:08:25 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:25.925145318Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:08:25 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:25.933065869Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Mar 29 17:08:25 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:25.935125529Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Mar 29 17:08:25 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:25.935264876Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Mar 29 17:08:48 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:48.926934282Z" level=info msg="CreateContainer within sandbox \"02aef421fb85c8e0273b0e5008b331b2ab46c83586927c96a037efc8089fd531\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Mar 29 17:08:48 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:48.958726924Z" level=info msg="CreateContainer within sandbox \"02aef421fb85c8e0273b0e5008b331b2ab46c83586927c96a037efc8089fd531\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71\""
Mar 29 17:08:48 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:48.959315979Z" level=info msg="StartContainer for \"14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71\""
Mar 29 17:08:49 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:49.033893577Z" level=info msg="received exit event container_id:\"14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71\" id:\"14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71\" pid:3264 exit_status:255 exited_at:{seconds:1743268129 nanos:33573233}"
Mar 29 17:08:49 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:49.034148166Z" level=info msg="StartContainer for \"14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71\" returns successfully"
Mar 29 17:08:49 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:49.060608243Z" level=info msg="shim disconnected" id=14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71 namespace=k8s.io
Mar 29 17:08:49 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:49.060676034Z" level=warning msg="cleaning up after shim disconnected" id=14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71 namespace=k8s.io
Mar 29 17:08:49 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:49.060686216Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 29 17:08:49 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:49.807954341Z" level=info msg="RemoveContainer for \"5663a219b86739ca59951a40def0499436283c49a1b531327aeb40539502d7b9\""
Mar 29 17:08:49 old-k8s-version-469910 containerd[569]: time="2025-03-29T17:08:49.814660734Z" level=info msg="RemoveContainer for \"5663a219b86739ca59951a40def0499436283c49a1b531327aeb40539502d7b9\" returns successfully"
==> coredns [a5dfee4a506c778834f309546822471bfe29cb70606442c7cda067bc889ec4e8] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:42960 - 44763 "HINFO IN 7797894344718508090.8955861492053321037. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015199876s
==> coredns [b1d8c9bdcf51275b8ee260ff27008e50a288380eee492e2ea65a17eecd633a05] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:55590 - 20042 "HINFO IN 5483447167202158062.2869195030911913677. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.062352564s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0329 17:05:57.306439 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-03-29 17:05:27.305231185 +0000 UTC m=+0.056645735) (total time: 30.000557668s):
Trace[2019727887]: [30.000557668s] [30.000557668s] END
E0329 17:05:57.306485 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0329 17:05:57.306971 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-03-29 17:05:27.306720266 +0000 UTC m=+0.058134824) (total time: 30.000233085s):
Trace[939984059]: [30.000233085s] [30.000233085s] END
E0329 17:05:57.307025 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0329 17:05:57.307266 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-03-29 17:05:27.306991353 +0000 UTC m=+0.058405903) (total time: 30.000260622s):
Trace[911902081]: [30.000260622s] [30.000260622s] END
E0329 17:05:57.307319 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
==> describe nodes <==
Name: old-k8s-version-469910
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-469910
kubernetes.io/os=linux
minikube.k8s.io/commit=9e4fb25ec9c9ec7d3315da8ba61a31fdfa364d77
minikube.k8s.io/name=old-k8s-version-469910
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_03_29T17_02_25_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 29 Mar 2025 17:02:22 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-469910
AcquireTime: <unset>
RenewTime: Sat, 29 Mar 2025 17:11:06 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 29 Mar 2025 17:06:14 +0000 Sat, 29 Mar 2025 17:02:15 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 29 Mar 2025 17:06:14 +0000 Sat, 29 Mar 2025 17:02:15 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 29 Mar 2025 17:06:14 +0000 Sat, 29 Mar 2025 17:02:15 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 29 Mar 2025 17:06:14 +0000 Sat, 29 Mar 2025 17:02:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-469910
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
System Info:
Machine ID: 819dedf46adc4ec78ced29ee0197315e
System UUID: 282ff8c5-cd3e-4db9-b5c7-0fb10fc87bee
Boot ID: c2113aca-96f4-463b-a6f6-324539bb3c55
Kernel Version: 5.15.0-1080-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.25
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m35s
kube-system coredns-74ff55c5b-svwdf 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 8m28s
kube-system etcd-old-k8s-version-469910 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m35s
kube-system kindnet-wzljb 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 8m28s
kube-system kube-apiserver-old-k8s-version-469910 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m35s
kube-system kube-controller-manager-old-k8s-version-469910 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m35s
kube-system kube-proxy-wcgkr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m28s
kube-system kube-scheduler-old-k8s-version-469910 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m35s
kube-system metrics-server-9975d5f86-k7v6g 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m24s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m26s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-5v95z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m26s
kubernetes-dashboard kubernetes-dashboard-cd95d586-r7dkj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m26s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m55s (x4 over 8m55s) kubelet Node old-k8s-version-469910 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m55s (x4 over 8m55s) kubelet Node old-k8s-version-469910 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m55s (x4 over 8m55s) kubelet Node old-k8s-version-469910 status is now: NodeHasSufficientPID
Normal Starting 8m35s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m35s kubelet Node old-k8s-version-469910 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m35s kubelet Node old-k8s-version-469910 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m35s kubelet Node old-k8s-version-469910 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m35s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m28s kubelet Node old-k8s-version-469910 status is now: NodeReady
Normal Starting 8m26s kube-proxy Starting kube-proxy.
Normal Starting 5m56s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 5m56s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 5m55s (x7 over 5m56s) kubelet Node old-k8s-version-469910 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m55s (x7 over 5m56s) kubelet Node old-k8s-version-469910 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m55s (x9 over 5m56s) kubelet Node old-k8s-version-469910 status is now: NodeHasSufficientPID
Normal Starting 5m41s kube-proxy Starting kube-proxy.
==> dmesg <==
[Mar29 15:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014257] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.457047] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.025896] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.670031] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.128829] kauditd_printk_skb: 36 callbacks suppressed
[Mar29 16:29] FS-Cache: Duplicate cookie detected
[ +0.000687] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
[ +0.000956] FS-Cache: O-cookie d=000000006f45aed7{9P.session} n=000000006d0db1b7
[ +0.001115] FS-Cache: O-key=[10] '34323935393635303634'
[ +0.000742] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
[ +0.000907] FS-Cache: N-cookie d=000000006f45aed7{9P.session} n=00000000c72e4d92
[ +0.001069] FS-Cache: N-key=[10] '34323935393635303634'
[Mar29 16:55] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Mar29 16:56] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
==> etcd [45b6c3befe403c57c64897456aea1d1f627af2619be6ae801ddef2b592be0f0b] <==
raft2025/03/29 17:02:15 INFO: ea7e25599daad906 became candidate at term 2
raft2025/03/29 17:02:15 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2025/03/29 17:02:15 INFO: ea7e25599daad906 became leader at term 2
raft2025/03/29 17:02:15 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-03-29 17:02:15.574125 I | etcdserver: published {Name:old-k8s-version-469910 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-03-29 17:02:15.574244 I | embed: ready to serve client requests
2025-03-29 17:02:15.583596 I | embed: serving client requests on 192.168.76.2:2379
2025-03-29 17:02:15.585766 I | etcdserver: setting up the initial cluster version to 3.4
2025-03-29 17:02:15.586166 I | embed: ready to serve client requests
2025-03-29 17:02:15.596397 I | embed: serving client requests on 127.0.0.1:2379
2025-03-29 17:02:15.620033 N | etcdserver/membership: set the initial cluster version to 3.4
2025-03-29 17:02:15.620332 I | etcdserver/api: enabled capabilities for version 3.4
2025-03-29 17:02:41.452105 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:02:46.554464 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:02:56.555319 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:03:06.554437 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:03:16.554376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:03:26.554603 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:03:36.554628 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:03:46.554483 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:03:56.554546 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:04:06.554426 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:04:16.554496 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:04:26.554468 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:04:36.554381 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [e8ee39792992c3690c7a1594f566f69559236cf0a1ffa535c6ae2e183727988d] <==
2025-03-29 17:07:07.405459 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:07:17.405669 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:07:27.405487 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:07:37.405324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:07:47.405547 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:07:57.405516 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:08:07.405619 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:08:17.405393 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:08:27.405565 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:08:37.405499 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:08:47.405452 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:08:57.405474 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:09:07.405420 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:09:17.405617 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:09:27.405411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:09:37.405441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:09:47.405496 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:09:57.405817 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:10:07.405576 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:10:17.405604 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:10:27.405474 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:10:37.405509 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:10:47.406843 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:10:57.405552 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:11:07.405417 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
17:11:08 up 1:53, 0 users, load average: 2.55, 2.29, 2.61
Linux old-k8s-version-469910 5.15.0-1080-aws #87~20.04.1-Ubuntu SMP Tue Mar 4 10:57:22 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [08cd64e5efe8ff52775b40477442e26a8cdba2671752a9c947ffb54af011e505] <==
I0329 17:09:08.463574 1 main.go:301] handling current node
I0329 17:09:18.463471 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:09:18.463506 1 main.go:301] handling current node
I0329 17:09:28.456280 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:09:28.456324 1 main.go:301] handling current node
I0329 17:09:38.463496 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:09:38.463533 1 main.go:301] handling current node
I0329 17:09:48.463449 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:09:48.463487 1 main.go:301] handling current node
I0329 17:09:58.457287 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:09:58.457572 1 main.go:301] handling current node
I0329 17:10:08.463487 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:10:08.463522 1 main.go:301] handling current node
I0329 17:10:18.464718 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:10:18.464753 1 main.go:301] handling current node
I0329 17:10:28.457014 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:10:28.457114 1 main.go:301] handling current node
I0329 17:10:38.463507 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:10:38.463545 1 main.go:301] handling current node
I0329 17:10:48.463469 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:10:48.463506 1 main.go:301] handling current node
I0329 17:10:58.465385 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:10:58.465607 1 main.go:301] handling current node
I0329 17:11:08.463468 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:11:08.463501 1 main.go:301] handling current node
==> kindnet [d73c4b171565d73ace6405634046441c894c4d53c7ca6b54394fc1f03f94bf95] <==
I0329 17:02:44.432950 1 controller.go:401] Syncing nftables rules
I0329 17:02:54.155446 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:02:54.155501 1 main.go:301] handling current node
I0329 17:03:04.148462 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:03:04.148513 1 main.go:301] handling current node
I0329 17:03:14.157560 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:03:14.157603 1 main.go:301] handling current node
I0329 17:03:24.157656 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:03:24.157691 1 main.go:301] handling current node
I0329 17:03:34.150374 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:03:34.150430 1 main.go:301] handling current node
I0329 17:03:44.148857 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:03:44.148894 1 main.go:301] handling current node
I0329 17:03:54.151420 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:03:54.151465 1 main.go:301] handling current node
I0329 17:04:04.148488 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:04:04.148534 1 main.go:301] handling current node
I0329 17:04:14.148155 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:04:14.148196 1 main.go:301] handling current node
I0329 17:04:24.152990 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:04:24.153026 1 main.go:301] handling current node
I0329 17:04:34.148258 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:04:34.148288 1 main.go:301] handling current node
I0329 17:04:44.152719 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0329 17:04:44.152805 1 main.go:301] handling current node
==> kube-apiserver [1d665425da32470bd2630153351ec20d276fad049da85eaa1032a6def1a1deff] <==
I0329 17:07:47.934815 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:07:47.934825 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0329 17:08:25.669014 1 client.go:360] parsed scheme: "passthrough"
I0329 17:08:25.669058 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:08:25.669218 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0329 17:08:27.832434 1 handler_proxy.go:102] no RequestInfo found in the context
E0329 17:08:27.832534 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0329 17:08:27.832571 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0329 17:08:59.284294 1 client.go:360] parsed scheme: "passthrough"
I0329 17:08:59.284508 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:08:59.284644 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0329 17:09:33.610583 1 client.go:360] parsed scheme: "passthrough"
I0329 17:09:33.610629 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:09:33.610639 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0329 17:10:17.307909 1 client.go:360] parsed scheme: "passthrough"
I0329 17:10:17.308073 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:10:17.308131 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0329 17:10:24.903799 1 handler_proxy.go:102] no RequestInfo found in the context
E0329 17:10:24.903875 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0329 17:10:24.903895 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0329 17:11:01.001445 1 client.go:360] parsed scheme: "passthrough"
I0329 17:11:01.001502 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:11:01.001511 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [e4211e5b58844aac9df57b0f72dd7ef968a74d18917ec3d0dc6bca362a5d010f] <==
I0329 17:02:22.857116 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0329 17:02:23.339748 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0329 17:02:23.384200 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0329 17:02:23.533649 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0329 17:02:23.534760 1 controller.go:606] quota admission added evaluator for: endpoints
I0329 17:02:23.541886 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0329 17:02:24.511598 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0329 17:02:24.908912 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0329 17:02:24.983844 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0329 17:02:33.349518 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0329 17:02:40.513215 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0329 17:02:40.550908 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0329 17:02:50.911220 1 client.go:360] parsed scheme: "passthrough"
I0329 17:02:50.911264 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:02:50.911273 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0329 17:03:30.742044 1 client.go:360] parsed scheme: "passthrough"
I0329 17:03:30.742087 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:03:30.742096 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0329 17:04:03.748662 1 client.go:360] parsed scheme: "passthrough"
I0329 17:04:03.748710 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:04:03.748719 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0329 17:04:41.396762 1 client.go:360] parsed scheme: "passthrough"
I0329 17:04:41.396812 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:04:41.396822 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0329 17:04:43.813938 1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
==> kube-controller-manager [476bef2c0ac3db5deaf24b4ec3339f3ab38a00eace7a38e235325e571ed11ea1] <==
E0329 17:06:43.603343 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:06:49.380183 1 request.go:655] Throttling request took 1.048169646s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
W0329 17:06:50.231636 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:07:14.105419 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:07:21.882132 1 request.go:655] Throttling request took 1.04836279s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
W0329 17:07:22.733498 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:07:44.607255 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:07:54.384391 1 request.go:655] Throttling request took 1.048271289s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0329 17:07:55.236178 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:08:15.109220 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:08:26.886746 1 request.go:655] Throttling request took 1.048434915s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
W0329 17:08:27.738124 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:08:45.611118 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:08:59.388475 1 request.go:655] Throttling request took 1.048432812s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W0329 17:09:00.240059 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:09:16.112901 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:09:31.890659 1 request.go:655] Throttling request took 1.048623858s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0329 17:09:32.741791 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:09:46.614803 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:10:04.392317 1 request.go:655] Throttling request took 1.047993418s, request: GET:https://192.168.76.2:8443/apis/batch/v1beta1?timeout=32s
W0329 17:10:05.244006 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:10:17.117142 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:10:36.894377 1 request.go:655] Throttling request took 1.047872902s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0329 17:10:37.745886 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:10:47.619045 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
==> kube-controller-manager [fdf64c9da80b16dd615c8a65bf20e7dbdac57ddd63ab9ea71557f869e4214e70] <==
I0329 17:02:40.578022 1 shared_informer.go:247] Caches are synced for disruption
I0329 17:02:40.578051 1 disruption.go:339] Sending events to api server.
I0329 17:02:40.578338 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-469910" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0329 17:02:40.598525 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-pb79h"
I0329 17:02:40.626458 1 shared_informer.go:247] Caches are synced for stateful set
I0329 17:02:40.626508 1 shared_informer.go:247] Caches are synced for PVC protection
I0329 17:02:40.638818 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wzljb"
I0329 17:02:40.645102 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wcgkr"
I0329 17:02:40.645253 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-svwdf"
I0329 17:02:40.676950 1 shared_informer.go:247] Caches are synced for attach detach
I0329 17:02:40.677793 1 shared_informer.go:247] Caches are synced for persistent volume
I0329 17:02:40.678119 1 shared_informer.go:247] Caches are synced for expand
I0329 17:02:40.686919 1 shared_informer.go:247] Caches are synced for resource quota
I0329 17:02:40.728881 1 shared_informer.go:247] Caches are synced for resource quota
E0329 17:02:40.749872 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"fa427ce9-4957-484a-bb7a-730bcad52c6a", ResourceVersion:"263", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63878864544, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40017a68c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40017a68e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x40017a6900), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001e10c00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017a6
920), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017a6940), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40017a6980)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001e1a720), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001e05628), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000610700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000e188)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001e05678)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0329 17:02:40.835194 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0329 17:02:41.118337 1 shared_informer.go:247] Caches are synced for garbage collector
I0329 17:02:41.118361 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0329 17:02:41.136667 1 shared_informer.go:247] Caches are synced for garbage collector
I0329 17:02:41.864620 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0329 17:02:41.892922 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-pb79h"
I0329 17:02:45.532125 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0329 17:04:43.203946 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E0329 17:04:43.590650 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0329 17:04:44.296389 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-k7v6g"
==> kube-proxy [2bb6df8707154298dfc0cb21f5c505ece8764779eb903346bffb88181622549c] <==
I0329 17:02:42.468837 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0329 17:02:42.468917 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0329 17:02:42.516156 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0329 17:02:42.516244 1 server_others.go:185] Using iptables Proxier.
I0329 17:02:42.516460 1 server.go:650] Version: v1.20.0
I0329 17:02:42.516956 1 config.go:315] Starting service config controller
I0329 17:02:42.516974 1 shared_informer.go:240] Waiting for caches to sync for service config
I0329 17:02:42.519350 1 config.go:224] Starting endpoint slice config controller
I0329 17:02:42.519364 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0329 17:02:42.521337 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0329 17:02:42.618366 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [476f8a391def914e7a67fa8cc7c10883e946e9e4bdd7cef65ed70f98df3ef191] <==
I0329 17:05:27.613417 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0329 17:05:27.613481 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0329 17:05:27.668180 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0329 17:05:27.668307 1 server_others.go:185] Using iptables Proxier.
I0329 17:05:27.668751 1 server.go:650] Version: v1.20.0
I0329 17:05:27.681212 1 config.go:315] Starting service config controller
I0329 17:05:27.681230 1 shared_informer.go:240] Waiting for caches to sync for service config
I0329 17:05:27.681257 1 config.go:224] Starting endpoint slice config controller
I0329 17:05:27.681261 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0329 17:05:27.781359 1 shared_informer.go:247] Caches are synced for service config
I0329 17:05:27.781427 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [5844d741e22ff09ae2c803a73a957b04b60d9c6d7c529313cb014e7d6aa2cd2b] <==
W0329 17:02:22.037150 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0329 17:02:22.037246 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0329 17:02:22.084779 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0329 17:02:22.085270 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0329 17:02:22.085290 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0329 17:02:22.085305 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0329 17:02:22.108175 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0329 17:02:22.108419 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0329 17:02:22.108527 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0329 17:02:22.108616 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0329 17:02:22.108714 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0329 17:02:22.108809 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0329 17:02:22.108898 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0329 17:02:22.108977 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0329 17:02:22.109104 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0329 17:02:22.109186 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0329 17:02:22.109360 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0329 17:02:22.109407 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0329 17:02:22.985221 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0329 17:02:23.007071 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0329 17:02:23.053626 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0329 17:02:23.101349 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0329 17:02:23.120762 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0329 17:02:23.127783 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0329 17:02:25.185400 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [bfdb0b4297e36c3796a7a426259c1feba5b7a1a067614b3d35556d6bdbbc76ee] <==
I0329 17:05:17.458335 1 serving.go:331] Generated self-signed cert in-memory
W0329 17:05:23.497590 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0329 17:05:23.497825 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0329 17:05:23.497861 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0329 17:05:23.497909 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0329 17:05:23.975520 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0329 17:05:23.975725 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0329 17:05:23.975762 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0329 17:05:23.975798 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0329 17:05:24.177244 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Mar 29 17:09:38 old-k8s-version-469910 kubelet[662]: E0329 17:09:38.926782 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:09:48 old-k8s-version-469910 kubelet[662]: I0329 17:09:48.928108 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71
Mar 29 17:09:48 old-k8s-version-469910 kubelet[662]: E0329 17:09:48.928853 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
Mar 29 17:09:49 old-k8s-version-469910 kubelet[662]: E0329 17:09:49.925066 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:10:02 old-k8s-version-469910 kubelet[662]: I0329 17:10:02.925869 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71
Mar 29 17:10:02 old-k8s-version-469910 kubelet[662]: E0329 17:10:02.926703 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
Mar 29 17:10:02 old-k8s-version-469910 kubelet[662]: E0329 17:10:02.928431 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:10:15 old-k8s-version-469910 kubelet[662]: E0329 17:10:15.925755 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:10:16 old-k8s-version-469910 kubelet[662]: I0329 17:10:16.924254 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71
Mar 29 17:10:16 old-k8s-version-469910 kubelet[662]: E0329 17:10:16.924947 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
Mar 29 17:10:27 old-k8s-version-469910 kubelet[662]: I0329 17:10:27.924094 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71
Mar 29 17:10:27 old-k8s-version-469910 kubelet[662]: E0329 17:10:27.924876 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
Mar 29 17:10:28 old-k8s-version-469910 kubelet[662]: E0329 17:10:28.924927 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:10:39 old-k8s-version-469910 kubelet[662]: I0329 17:10:39.924159 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71
Mar 29 17:10:39 old-k8s-version-469910 kubelet[662]: E0329 17:10:39.925004 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
Mar 29 17:10:40 old-k8s-version-469910 kubelet[662]: E0329 17:10:40.928277 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:10:53 old-k8s-version-469910 kubelet[662]: I0329 17:10:53.924043 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71
Mar 29 17:10:53 old-k8s-version-469910 kubelet[662]: E0329 17:10:53.924385 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
Mar 29 17:10:55 old-k8s-version-469910 kubelet[662]: E0329 17:10:55.924880 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:11:05 old-k8s-version-469910 kubelet[662]: I0329 17:11:05.923967 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 14537e00fa900766000b853d81b8b0760c1ecee044b5c020aee612c3e5d68c71
Mar 29 17:11:05 old-k8s-version-469910 kubelet[662]: E0329 17:11:05.924316 662 pod_workers.go:191] Error syncing pod 566d544e-9098-46b4-8389-0662e034baf0 ("dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5v95z_kubernetes-dashboard(566d544e-9098-46b4-8389-0662e034baf0)"
Mar 29 17:11:08 old-k8s-version-469910 kubelet[662]: E0329 17:11:08.955749 662 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Mar 29 17:11:08 old-k8s-version-469910 kubelet[662]: E0329 17:11:08.955807 662 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Mar 29 17:11:08 old-k8s-version-469910 kubelet[662]: E0329 17:11:08.955939 662 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-fmlf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-k7v6g_kube-system(59c600a
b-7b77-42ce-b028-906dbe9c84d1): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Mar 29 17:11:08 old-k8s-version-469910 kubelet[662]: E0329 17:11:08.955973 662 pod_workers.go:191] Error syncing pod 59c600ab-7b77-42ce-b028-906dbe9c84d1 ("metrics-server-9975d5f86-k7v6g_kube-system(59c600ab-7b77-42ce-b028-906dbe9c84d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
==> kubernetes-dashboard [9affe02cbdfd87007f6bb996c748c3449799a41720b75c90d252c62fcb927af2] <==
2025/03/29 17:05:46 Starting overwatch
2025/03/29 17:05:46 Using namespace: kubernetes-dashboard
2025/03/29 17:05:46 Using in-cluster config to connect to apiserver
2025/03/29 17:05:46 Using secret token for csrf signing
2025/03/29 17:05:46 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/03/29 17:05:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/03/29 17:05:47 Successful initial request to the apiserver, version: v1.20.0
2025/03/29 17:05:47 Generating JWE encryption key
2025/03/29 17:05:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/03/29 17:05:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/03/29 17:05:47 Initializing JWE encryption key from synchronized object
2025/03/29 17:05:47 Creating in-cluster Sidecar client
2025/03/29 17:05:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:05:47 Serving insecurely on HTTP port: 9090
2025/03/29 17:06:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:06:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:07:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:07:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:08:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:08:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:09:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:09:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:10:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:10:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [7170455291cf3cb3b0f76ac6cd41db4b1dae2328482597589a839fe7a7e8e9a2] <==
I0329 17:06:14.025046 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0329 17:06:14.040508 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0329 17:06:14.040676 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0329 17:06:31.486226 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0329 17:06:31.486668 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-469910_f00a7714-4bc5-4e57-9057-12404ab4e3d8!
I0329 17:06:31.488139 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7d0b20c2-3c4b-4e8a-a576-b1e5121d5199", APIVersion:"v1", ResourceVersion:"866", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-469910_f00a7714-4bc5-4e57-9057-12404ab4e3d8 became leader
I0329 17:06:31.587225 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-469910_f00a7714-4bc5-4e57-9057-12404ab4e3d8!
==> storage-provisioner [c332b5510a0e67142842621140541b4ab72255b20ae34d6867fef0ea4307b24b] <==
I0329 17:05:27.472896 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0329 17:05:57.475104 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-469910 -n old-k8s-version-469910
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-469910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-k7v6g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-469910 describe pod metrics-server-9975d5f86-k7v6g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-469910 describe pod metrics-server-9975d5f86-k7v6g: exit status 1 (136.187159ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-k7v6g" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-469910 describe pod metrics-server-9975d5f86-k7v6g: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.44s)