=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-385687 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0
E1212 00:16:35.618839 1439016 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/functional-615762/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-385687 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0: exit status 102 (6m17.05986065s)
-- stdout --
* [old-k8s-version-385687] minikube v1.34.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20083
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20083-1433638/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-1433638/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-385687" primary control-plane node in "old-k8s-version-385687" cluster
* Pulling base image v0.0.45-1733912881-20083 ...
* Restarting existing docker container for "old-k8s-version-385687" ...
* Preparing Kubernetes v1.20.0 on Docker 27.4.0 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-385687 addons enable metrics-server
* Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
-- /stdout --
** stderr **
I1212 00:16:22.619825 1748736 out.go:345] Setting OutFile to fd 1 ...
I1212 00:16:22.621981 1748736 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:16:22.622038 1748736 out.go:358] Setting ErrFile to fd 2...
I1212 00:16:22.622061 1748736 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:16:22.622405 1748736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-1433638/.minikube/bin
I1212 00:16:22.623017 1748736 out.go:352] Setting JSON to false
I1212 00:16:22.624286 1748736 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25130,"bootTime":1733937453,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1212 00:16:22.624396 1748736 start.go:139] virtualization:
I1212 00:16:22.627998 1748736 out.go:177] * [old-k8s-version-385687] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1212 00:16:22.632518 1748736 out.go:177] - MINIKUBE_LOCATION=20083
I1212 00:16:22.632678 1748736 notify.go:220] Checking for updates...
I1212 00:16:22.639213 1748736 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1212 00:16:22.642415 1748736 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20083-1433638/kubeconfig
I1212 00:16:22.645561 1748736 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-1433638/.minikube
I1212 00:16:22.649236 1748736 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1212 00:16:22.652157 1748736 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1212 00:16:22.655713 1748736 config.go:182] Loaded profile config "old-k8s-version-385687": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I1212 00:16:22.659390 1748736 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
I1212 00:16:22.662768 1748736 driver.go:394] Setting default libvirt URI to qemu:///system
I1212 00:16:22.702753 1748736 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
I1212 00:16:22.702876 1748736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:16:22.777402 1748736 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:true NGoroutines:69 SystemTime:2024-12-12 00:16:22.763736664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1212 00:16:22.777513 1748736 docker.go:318] overlay module found
I1212 00:16:22.781689 1748736 out.go:177] * Using the docker driver based on existing profile
I1212 00:16:22.784895 1748736 start.go:297] selected driver: docker
I1212 00:16:22.784924 1748736 start.go:901] validating driver "docker" against &{Name:old-k8s-version-385687 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1212 00:16:22.785022 1748736 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1212 00:16:22.785729 1748736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:16:22.872894 1748736 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:true NGoroutines:69 SystemTime:2024-12-12 00:16:22.863929772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1212 00:16:22.873299 1748736 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1212 00:16:22.873327 1748736 cni.go:84] Creating CNI manager for ""
I1212 00:16:22.873375 1748736 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I1212 00:16:22.873421 1748736 start.go:340] cluster config:
{Name:old-k8s-version-385687 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1212 00:16:22.876967 1748736 out.go:177] * Starting "old-k8s-version-385687" primary control-plane node in "old-k8s-version-385687" cluster
I1212 00:16:22.880023 1748736 cache.go:121] Beginning downloading kic base image for docker with docker
I1212 00:16:22.883132 1748736 out.go:177] * Pulling base image v0.0.45-1733912881-20083 ...
I1212 00:16:22.886124 1748736 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1212 00:16:22.886191 1748736 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
I1212 00:16:22.886223 1748736 cache.go:56] Caching tarball of preloaded images
I1212 00:16:22.886229 1748736 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon
I1212 00:16:22.886308 1748736 preload.go:172] Found /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1212 00:16:22.886317 1748736 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
I1212 00:16:22.886427 1748736 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/old-k8s-version-385687/config.json ...
I1212 00:16:22.918003 1748736 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon, skipping pull
I1212 00:16:22.918028 1748736 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 exists in daemon, skipping load
I1212 00:16:22.918044 1748736 cache.go:194] Successfully downloaded all kic artifacts
I1212 00:16:22.918067 1748736 start.go:360] acquireMachinesLock for old-k8s-version-385687: {Name:mk95b66452baef37e99c28285ea36394e3b60d49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:16:22.918129 1748736 start.go:364] duration metric: took 40.213µs to acquireMachinesLock for "old-k8s-version-385687"
I1212 00:16:22.918163 1748736 start.go:96] Skipping create...Using existing machine configuration
I1212 00:16:22.918172 1748736 fix.go:54] fixHost starting:
I1212 00:16:22.918442 1748736 cli_runner.go:164] Run: docker container inspect old-k8s-version-385687 --format={{.State.Status}}
I1212 00:16:22.940972 1748736 fix.go:112] recreateIfNeeded on old-k8s-version-385687: state=Stopped err=<nil>
W1212 00:16:22.941006 1748736 fix.go:138] unexpected machine state, will restart: <nil>
I1212 00:16:22.944778 1748736 out.go:177] * Restarting existing docker container for "old-k8s-version-385687" ...
I1212 00:16:22.948076 1748736 cli_runner.go:164] Run: docker start old-k8s-version-385687
I1212 00:16:23.342213 1748736 cli_runner.go:164] Run: docker container inspect old-k8s-version-385687 --format={{.State.Status}}
I1212 00:16:23.377838 1748736 kic.go:430] container "old-k8s-version-385687" state is running.
I1212 00:16:23.378232 1748736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-385687
I1212 00:16:23.405387 1748736 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/old-k8s-version-385687/config.json ...
I1212 00:16:23.405623 1748736 machine.go:93] provisionDockerMachine start ...
I1212 00:16:23.405691 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:23.432850 1748736 main.go:141] libmachine: Using SSH client type: native
I1212 00:16:23.433107 1748736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34605 <nil> <nil>}
I1212 00:16:23.433116 1748736 main.go:141] libmachine: About to run SSH command:
hostname
I1212 00:16:23.434806 1748736 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1212 00:16:26.571357 1748736 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-385687
I1212 00:16:26.571384 1748736 ubuntu.go:169] provisioning hostname "old-k8s-version-385687"
I1212 00:16:26.571477 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:26.591851 1748736 main.go:141] libmachine: Using SSH client type: native
I1212 00:16:26.592118 1748736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34605 <nil> <nil>}
I1212 00:16:26.592136 1748736 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-385687 && echo "old-k8s-version-385687" | sudo tee /etc/hostname
I1212 00:16:26.753302 1748736 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-385687
I1212 00:16:26.753395 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:26.771875 1748736 main.go:141] libmachine: Using SSH client type: native
I1212 00:16:26.772125 1748736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34605 <nil> <nil>}
I1212 00:16:26.772150 1748736 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-385687' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-385687/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-385687' | sudo tee -a /etc/hosts;
fi
fi
I1212 00:16:26.911639 1748736 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1212 00:16:26.911671 1748736 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20083-1433638/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-1433638/.minikube}
I1212 00:16:26.911690 1748736 ubuntu.go:177] setting up certificates
I1212 00:16:26.911699 1748736 provision.go:84] configureAuth start
I1212 00:16:26.911758 1748736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-385687
I1212 00:16:26.933308 1748736 provision.go:143] copyHostCerts
I1212 00:16:26.933378 1748736 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-1433638/.minikube/cert.pem, removing ...
I1212 00:16:26.933388 1748736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-1433638/.minikube/cert.pem
I1212 00:16:26.933463 1748736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-1433638/.minikube/cert.pem (1123 bytes)
I1212 00:16:26.933562 1748736 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-1433638/.minikube/key.pem, removing ...
I1212 00:16:26.933573 1748736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-1433638/.minikube/key.pem
I1212 00:16:26.933603 1748736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-1433638/.minikube/key.pem (1675 bytes)
I1212 00:16:26.933664 1748736 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.pem, removing ...
I1212 00:16:26.933673 1748736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.pem
I1212 00:16:26.933699 1748736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.pem (1082 bytes)
I1212 00:16:26.933751 1748736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-385687 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-385687]
I1212 00:16:27.352715 1748736 provision.go:177] copyRemoteCerts
I1212 00:16:27.352798 1748736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1212 00:16:27.352857 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:27.372123 1748736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/old-k8s-version-385687/id_rsa Username:docker}
I1212 00:16:27.469561 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1212 00:16:27.496797 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1212 00:16:27.523919 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1212 00:16:27.551333 1748736 provision.go:87] duration metric: took 639.605514ms to configureAuth
I1212 00:16:27.551431 1748736 ubuntu.go:193] setting minikube options for container-runtime
I1212 00:16:27.551656 1748736 config.go:182] Loaded profile config "old-k8s-version-385687": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I1212 00:16:27.551743 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:27.597795 1748736 main.go:141] libmachine: Using SSH client type: native
I1212 00:16:27.598040 1748736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34605 <nil> <nil>}
I1212 00:16:27.598050 1748736 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1212 00:16:27.748365 1748736 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I1212 00:16:27.748442 1748736 ubuntu.go:71] root file system type: overlay
I1212 00:16:27.748603 1748736 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1212 00:16:27.748714 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:27.769331 1748736 main.go:141] libmachine: Using SSH client type: native
I1212 00:16:27.769573 1748736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34605 <nil> <nil>}
I1212 00:16:27.769653 1748736 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1212 00:16:27.926344 1748736 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1212 00:16:27.926493 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:27.953103 1748736 main.go:141] libmachine: Using SSH client type: native
I1212 00:16:27.953344 1748736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34605 <nil> <nil>}
I1212 00:16:27.953361 1748736 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1212 00:16:28.098671 1748736 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1212 00:16:28.098702 1748736 machine.go:96] duration metric: took 4.693059226s to provisionDockerMachine
I1212 00:16:28.098715 1748736 start.go:293] postStartSetup for "old-k8s-version-385687" (driver="docker")
I1212 00:16:28.098726 1748736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1212 00:16:28.098798 1748736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1212 00:16:28.098843 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:28.132072 1748736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/old-k8s-version-385687/id_rsa Username:docker}
I1212 00:16:28.230306 1748736 ssh_runner.go:195] Run: cat /etc/os-release
I1212 00:16:28.234881 1748736 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1212 00:16:28.234917 1748736 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1212 00:16:28.234928 1748736 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1212 00:16:28.234935 1748736 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1212 00:16:28.234947 1748736 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-1433638/.minikube/addons for local assets ...
I1212 00:16:28.235008 1748736 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-1433638/.minikube/files for local assets ...
I1212 00:16:28.235094 1748736 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-1433638/.minikube/files/etc/ssl/certs/14390162.pem -> 14390162.pem in /etc/ssl/certs
I1212 00:16:28.235210 1748736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1212 00:16:28.246995 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/files/etc/ssl/certs/14390162.pem --> /etc/ssl/certs/14390162.pem (1708 bytes)
I1212 00:16:28.278511 1748736 start.go:296] duration metric: took 179.780978ms for postStartSetup
I1212 00:16:28.278733 1748736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1212 00:16:28.278821 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:28.298064 1748736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/old-k8s-version-385687/id_rsa Username:docker}
I1212 00:16:28.401174 1748736 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1212 00:16:28.408882 1748736 fix.go:56] duration metric: took 5.490700094s for fixHost
I1212 00:16:28.408905 1748736 start.go:83] releasing machines lock for "old-k8s-version-385687", held for 5.490762755s
I1212 00:16:28.408983 1748736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-385687
I1212 00:16:28.431446 1748736 ssh_runner.go:195] Run: cat /version.json
I1212 00:16:28.431512 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:28.431797 1748736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1212 00:16:28.431878 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:28.464484 1748736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/old-k8s-version-385687/id_rsa Username:docker}
I1212 00:16:28.477852 1748736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/old-k8s-version-385687/id_rsa Username:docker}
I1212 00:16:28.571495 1748736 ssh_runner.go:195] Run: systemctl --version
I1212 00:16:28.712470 1748736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1212 00:16:28.717195 1748736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1212 00:16:28.738497 1748736 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1212 00:16:28.738646 1748736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I1212 00:16:28.759141 1748736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I1212 00:16:28.785830 1748736 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1212 00:16:28.785909 1748736 start.go:495] detecting cgroup driver to use...
I1212 00:16:28.785958 1748736 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1212 00:16:28.786096 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:16:28.804054 1748736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I1212 00:16:28.816976 1748736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1212 00:16:28.829037 1748736 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1212 00:16:28.829167 1748736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1212 00:16:28.840603 1748736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:16:28.850906 1748736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1212 00:16:28.863556 1748736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:16:28.875184 1748736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1212 00:16:28.885515 1748736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1212 00:16:28.896567 1748736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1212 00:16:28.908041 1748736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1212 00:16:28.919063 1748736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:16:29.124000 1748736 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1212 00:16:29.270974 1748736 start.go:495] detecting cgroup driver to use...
I1212 00:16:29.271091 1748736 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1212 00:16:29.271152 1748736 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1212 00:16:29.324744 1748736 cruntime.go:279] skipping containerd shutdown because we are bound to it
I1212 00:16:29.324819 1748736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 00:16:29.393197 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:16:29.426882 1748736 ssh_runner.go:195] Run: which cri-dockerd
I1212 00:16:29.434740 1748736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1212 00:16:29.450706 1748736 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1212 00:16:29.476719 1748736 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1212 00:16:29.641485 1748736 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1212 00:16:29.799492 1748736 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I1212 00:16:29.799619 1748736 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1212 00:16:29.830564 1748736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:16:30.076990 1748736 ssh_runner.go:195] Run: sudo systemctl restart docker
I1212 00:16:31.872862 1748736 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.795822895s)
I1212 00:16:31.872938 1748736 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1212 00:16:31.926867 1748736 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1212 00:16:31.958125 1748736 out.go:235] * Preparing Kubernetes v1.20.0 on Docker 27.4.0 ...
I1212 00:16:31.958229 1748736 cli_runner.go:164] Run: docker network inspect old-k8s-version-385687 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 00:16:31.987278 1748736 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1212 00:16:31.991222 1748736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1212 00:16:32.001624 1748736 kubeadm.go:883] updating cluster {Name:old-k8s-version-385687 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1212 00:16:32.001743 1748736 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1212 00:16:32.001800 1748736 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1212 00:16:32.029435 1748736 docker.go:689] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.0
registry.k8s.io/kube-proxy:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
registry.k8s.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
registry.k8s.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
registry.k8s.io/kube-scheduler:v1.20.0
k8s.gcr.io/etcd:3.4.13-0
registry.k8s.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
registry.k8s.io/coredns:1.7.0
k8s.gcr.io/pause:3.2
registry.k8s.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I1212 00:16:32.029456 1748736 docker.go:619] Images already preloaded, skipping extraction
I1212 00:16:32.029527 1748736 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1212 00:16:32.051973 1748736 docker.go:689] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.0
registry.k8s.io/kube-proxy:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
registry.k8s.io/kube-controller-manager:v1.20.0
registry.k8s.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
registry.k8s.io/kube-scheduler:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
registry.k8s.io/etcd:3.4.13-0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
registry.k8s.io/coredns:1.7.0
k8s.gcr.io/pause:3.2
registry.k8s.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I1212 00:16:32.051994 1748736 cache_images.go:84] Images are preloaded, skipping loading
I1212 00:16:32.052005 1748736 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 docker true true} ...
I1212 00:16:32.052119 1748736 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-385687 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1212 00:16:32.052184 1748736 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1212 00:16:32.125417 1748736 cni.go:84] Creating CNI manager for ""
I1212 00:16:32.125501 1748736 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I1212 00:16:32.125527 1748736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1212 00:16:32.125574 1748736 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-385687 NodeName:old-k8s-version-385687 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I1212 00:16:32.125790 1748736 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "old-k8s-version-385687"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1212 00:16:32.125901 1748736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I1212 00:16:32.135684 1748736 binaries.go:44] Found k8s binaries, skipping transfer
I1212 00:16:32.135757 1748736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1212 00:16:32.144685 1748736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
I1212 00:16:32.166397 1748736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1212 00:16:32.187914 1748736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
I1212 00:16:32.207891 1748736 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1212 00:16:32.211989 1748736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1212 00:16:32.225958 1748736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:16:32.344755 1748736 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1212 00:16:32.360402 1748736 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/old-k8s-version-385687 for IP: 192.168.76.2
I1212 00:16:32.360425 1748736 certs.go:194] generating shared ca certs ...
I1212 00:16:32.360442 1748736 certs.go:226] acquiring lock for ca certs: {Name:mk79f9e2f05bff5bb27ff07029e74e2d72f5e267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:16:32.360659 1748736 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.key
I1212 00:16:32.360727 1748736 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/proxy-client-ca.key
I1212 00:16:32.360741 1748736 certs.go:256] generating profile certs ...
I1212 00:16:32.360855 1748736 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/old-k8s-version-385687/client.key
I1212 00:16:32.360941 1748736 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/old-k8s-version-385687/apiserver.key.02217119
I1212 00:16:32.361006 1748736 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/old-k8s-version-385687/proxy-client.key
I1212 00:16:32.361143 1748736 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/1439016.pem (1338 bytes)
W1212 00:16:32.361193 1748736 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/1439016_empty.pem, impossibly tiny 0 bytes
I1212 00:16:32.361209 1748736 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca-key.pem (1675 bytes)
I1212 00:16:32.361236 1748736 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca.pem (1082 bytes)
I1212 00:16:32.361281 1748736 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/cert.pem (1123 bytes)
I1212 00:16:32.361311 1748736 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/key.pem (1675 bytes)
I1212 00:16:32.361376 1748736 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/files/etc/ssl/certs/14390162.pem (1708 bytes)
I1212 00:16:32.362080 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1212 00:16:32.404081 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1212 00:16:32.456908 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1212 00:16:32.516371 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1212 00:16:32.576598 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/old-k8s-version-385687/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I1212 00:16:32.623455 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/old-k8s-version-385687/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1212 00:16:32.711405 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/old-k8s-version-385687/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1212 00:16:32.767123 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/old-k8s-version-385687/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1212 00:16:32.819307 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/1439016.pem --> /usr/share/ca-certificates/1439016.pem (1338 bytes)
I1212 00:16:32.853367 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/files/etc/ssl/certs/14390162.pem --> /usr/share/ca-certificates/14390162.pem (1708 bytes)
I1212 00:16:32.883373 1748736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1212 00:16:32.911725 1748736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
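
Each ssh_runner.go:362 line above streams a cert into the node over SSH; the final "scp memory --> /var/lib/minikube/kubeconfig" copies an asset that exists only in memory, never on the host's disk. A hedged sketch of that source abstraction, assuming nothing about minikube's real asset types (Asset, fileAsset, memoryAsset, and copyAsset are all illustrative):

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// Asset is anything that can be streamed to the node: a file on the
// host, or bytes rendered in memory (like the kubeconfig above).
type Asset interface {
	io.Reader
	Target() string
}

type fileAsset struct {
	*os.File
	target string
}

func (f fileAsset) Target() string { return f.target }

type memoryAsset struct {
	*bytes.Reader
	target string
}

func (m memoryAsset) Target() string { return m.target }

// copyAsset stands in for the scp transfer; here it only streams to w
// and reports the byte count, the way the log lines above do.
func copyAsset(w io.Writer, a Asset) error {
	n, err := io.Copy(w, a)
	fmt.Printf("scp --> %s (%d bytes)\n", a.Target(), n)
	return err
}

func main() {
	kc := memoryAsset{bytes.NewReader([]byte("apiVersion: v1\nkind: Config\n")), "/var/lib/minikube/kubeconfig"}
	if err := copyAsset(io.Discard, kc); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
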
I1212 00:16:32.932330 1748736 ssh_runner.go:195] Run: openssl version
I1212 00:16:32.938201 1748736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14390162.pem && ln -fs /usr/share/ca-certificates/14390162.pem /etc/ssl/certs/14390162.pem"
I1212 00:16:32.949817 1748736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14390162.pem
I1212 00:16:32.954557 1748736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:28 /usr/share/ca-certificates/14390162.pem
I1212 00:16:32.954628 1748736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14390162.pem
I1212 00:16:32.963035 1748736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14390162.pem /etc/ssl/certs/3ec20f2e.0"
I1212 00:16:32.977054 1748736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1212 00:16:32.987720 1748736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1212 00:16:32.991797 1748736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:21 /usr/share/ca-certificates/minikubeCA.pem
I1212 00:16:32.991865 1748736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1212 00:16:33.000703 1748736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1212 00:16:33.013036 1748736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1439016.pem && ln -fs /usr/share/ca-certificates/1439016.pem /etc/ssl/certs/1439016.pem"
I1212 00:16:33.024810 1748736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1439016.pem
I1212 00:16:33.029343 1748736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:28 /usr/share/ca-certificates/1439016.pem
I1212 00:16:33.029417 1748736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1439016.pem
I1212 00:16:33.037417 1748736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1439016.pem /etc/ssl/certs/51391683.0"
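
The three openssl/ln -fs sequences above implement OpenSSL's hashed-directory layout: every CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, where the hash is whatever `openssl x509 -hash -noout` prints (b5213941.0 for minikubeCA.pem here). A sketch that shells out to openssl exactly as the log does; linkCACert is a hypothetical name:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and links it
// into /etc/ssl/certs as <hash>.0, the layout shown in the log above.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: drop any stale link first, then symlink.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
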
I1212 00:16:33.048481 1748736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1212 00:16:33.053075 1748736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1212 00:16:33.061152 1748736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1212 00:16:33.073839 1748736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1212 00:16:33.083807 1748736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1212 00:16:33.092316 1748736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1212 00:16:33.100122 1748736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
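
The -checkend 86400 runs above ask openssl whether each control-plane certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. The same test in pure Go via crypto/x509, with expiresWithin as an illustrative name:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires within d,
// mirroring `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
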
I1212 00:16:33.107754 1748736 kubeadm.go:392] StartCluster: {Name:old-k8s-version-385687 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-385687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1212 00:16:33.108011 1748736 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1212 00:16:33.129546 1748736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1212 00:16:33.139683 1748736 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I1212 00:16:33.139757 1748736 kubeadm.go:593] restartPrimaryControlPlane start ...
I1212 00:16:33.139858 1748736 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1212 00:16:33.149151 1748736 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1212 00:16:33.149707 1748736 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-385687" does not appear in /home/jenkins/minikube-integration/20083-1433638/kubeconfig
I1212 00:16:33.149874 1748736 kubeconfig.go:62] /home/jenkins/minikube-integration/20083-1433638/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-385687" cluster setting kubeconfig missing "old-k8s-version-385687" context setting]
I1212 00:16:33.150230 1748736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-1433638/kubeconfig: {Name:mke08f285bdf4a548eaaf91468b606aae00e57d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
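
kubeconfig.go:47/62 above detect that the profile's cluster and context entries are missing from the kubeconfig and repair the file under a write lock. A hedged sketch of that repair using client-go's clientcmd package; ensureContext is an illustrative name and skips the locking:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureContext adds cluster and context entries for name if they are
// missing, the repair kubeconfig.go:62 reports above.
func ensureContext(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{Server: server}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := ensureContext(os.Getenv("KUBECONFIG"), "old-k8s-version-385687", "https://192.168.76.2:8443"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
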
I1212 00:16:33.152246 1748736 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1212 00:16:33.162144 1748736 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I1212 00:16:33.162228 1748736 kubeadm.go:597] duration metric: took 22.448884ms to restartPrimaryControlPlane
I1212 00:16:33.162252 1748736 kubeadm.go:394] duration metric: took 54.509923ms to StartCluster
I1212 00:16:33.162298 1748736 settings.go:142] acquiring lock: {Name:mkf3b256347f08c765a9bedb8db6d14ad0fbedd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:16:33.162389 1748736 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20083-1433638/kubeconfig
I1212 00:16:33.163117 1748736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-1433638/kubeconfig: {Name:mke08f285bdf4a548eaaf91468b606aae00e57d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:16:33.163637 1748736 config.go:182] Loaded profile config "old-k8s-version-385687": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I1212 00:16:33.163773 1748736 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1212 00:16:33.163866 1748736 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-385687"
I1212 00:16:33.163905 1748736 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-385687"
W1212 00:16:33.163947 1748736 addons.go:243] addon storage-provisioner should already be in state true
I1212 00:16:33.163989 1748736 host.go:66] Checking if "old-k8s-version-385687" exists ...
I1212 00:16:33.164724 1748736 cli_runner.go:164] Run: docker container inspect old-k8s-version-385687 --format={{.State.Status}}
I1212 00:16:33.163732 1748736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I1212 00:16:33.165346 1748736 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-385687"
I1212 00:16:33.165369 1748736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-385687"
I1212 00:16:33.165649 1748736 cli_runner.go:164] Run: docker container inspect old-k8s-version-385687 --format={{.State.Status}}
I1212 00:16:33.166151 1748736 addons.go:69] Setting dashboard=true in profile "old-k8s-version-385687"
I1212 00:16:33.166193 1748736 addons.go:234] Setting addon dashboard=true in "old-k8s-version-385687"
W1212 00:16:33.166232 1748736 addons.go:243] addon dashboard should already be in state true
I1212 00:16:33.166277 1748736 host.go:66] Checking if "old-k8s-version-385687" exists ...
I1212 00:16:33.166790 1748736 cli_runner.go:164] Run: docker container inspect old-k8s-version-385687 --format={{.State.Status}}
I1212 00:16:33.170229 1748736 out.go:177] * Verifying Kubernetes components...
I1212 00:16:33.171542 1748736 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-385687"
I1212 00:16:33.171617 1748736 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-385687"
W1212 00:16:33.171695 1748736 addons.go:243] addon metrics-server should already be in state true
I1212 00:16:33.171806 1748736 host.go:66] Checking if "old-k8s-version-385687" exists ...
I1212 00:16:33.176730 1748736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:16:33.184797 1748736 cli_runner.go:164] Run: docker container inspect old-k8s-version-385687 --format={{.State.Status}}
I1212 00:16:33.222370 1748736 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1212 00:16:33.222485 1748736 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1212 00:16:33.225182 1748736 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1212 00:16:33.225204 1748736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1212 00:16:33.225270 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:33.228911 1748736 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1212 00:16:33.232148 1748736 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1212 00:16:33.232174 1748736 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1212 00:16:33.232252 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:33.240165 1748736 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-385687"
W1212 00:16:33.240188 1748736 addons.go:243] addon default-storageclass should already be in state true
I1212 00:16:33.240215 1748736 host.go:66] Checking if "old-k8s-version-385687" exists ...
I1212 00:16:33.240621 1748736 cli_runner.go:164] Run: docker container inspect old-k8s-version-385687 --format={{.State.Status}}
I1212 00:16:33.270485 1748736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/old-k8s-version-385687/id_rsa Username:docker}
I1212 00:16:33.280234 1748736 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1212 00:16:33.283124 1748736 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1212 00:16:33.283146 1748736 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1212 00:16:33.283228 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:33.305665 1748736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/old-k8s-version-385687/id_rsa Username:docker}
I1212 00:16:33.331529 1748736 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1212 00:16:33.331513 1748736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/old-k8s-version-385687/id_rsa Username:docker}
I1212 00:16:33.331552 1748736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1212 00:16:33.331618 1748736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-385687
I1212 00:16:33.381207 1748736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34605 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/old-k8s-version-385687/id_rsa Username:docker}
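
The repeated `docker container inspect -f` calls above use a Go template to read the host port that Docker mapped to the container's 22/tcp, which sshutil.go then dials on 127.0.0.1 (port 34605 here). A sketch of the same extraction; hostSSHPort is a hypothetical name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks docker for the host port bound to 22/tcp inside the
// named container, using the template shown in the log above.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("old-k8s-version-385687")
	fmt.Println(port, err)
}
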
I1212 00:16:33.454173 1748736 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1212 00:16:33.505749 1748736 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-385687" to be "Ready" ...
I1212 00:16:33.518915 1748736 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1212 00:16:33.518995 1748736 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1212 00:16:33.567276 1748736 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1212 00:16:33.567344 1748736 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1212 00:16:33.570112 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1212 00:16:33.573413 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1212 00:16:33.653790 1748736 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1212 00:16:33.653863 1748736 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1212 00:16:33.683996 1748736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1212 00:16:33.684061 1748736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1212 00:16:33.775865 1748736 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1212 00:16:33.775933 1748736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1212 00:16:33.806317 1748736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1212 00:16:33.806384 1748736 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1212 00:16:33.859058 1748736 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I1212 00:16:33.859139 1748736 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
W1212 00:16:33.863031 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:33.863104 1748736 retry.go:31] will retry after 219.461131ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1212 00:16:33.882886 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:33.882967 1748736 retry.go:31] will retry after 147.584281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
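
Every addons.go:457 / retry.go:31 pair in this stretch is the same pattern: a kubectl apply fails because the apiserver is not yet listening on 8443, and the apply is retried after a randomized, growing delay. A generic sketch of that loop; retryWithBackoff is a stand-in, not minikube's actual retry package:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs f until it succeeds or attempts run out,
// sleeping a jittered, roughly doubling delay between tries.
func retryWithBackoff(attempts int, base time.Duration, f func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		sleep := delay/2 + jitter // varying delays, like the log above
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	n := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		n++
		if n < 4 {
			return fmt.Errorf("connection to the server localhost:8443 was refused")
		}
		return nil
	})
	fmt.Println("final:", err)
}
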
I1212 00:16:33.885887 1748736 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1212 00:16:33.885962 1748736 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1212 00:16:33.904358 1748736 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1212 00:16:33.904429 1748736 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1212 00:16:33.923562 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1212 00:16:33.985569 1748736 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1212 00:16:33.985650 1748736 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1212 00:16:34.031576 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1212 00:16:34.048147 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.048227 1748736 retry.go:31] will retry after 335.226612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.082905 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1212 00:16:34.084363 1748736 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1212 00:16:34.084427 1748736 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1212 00:16:34.175552 1748736 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1212 00:16:34.175633 1748736 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
W1212 00:16:34.209505 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.209592 1748736 retry.go:31] will retry after 252.230728ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.274136 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1212 00:16:34.311233 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.311327 1748736 retry.go:31] will retry after 375.653151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.384610 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1212 00:16:34.403835 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.403922 1748736 retry.go:31] will retry after 286.018762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.462189 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1212 00:16:34.505524 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.505617 1748736 retry.go:31] will retry after 234.975807ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1212 00:16:34.599844 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.599930 1748736 retry.go:31] will retry after 304.126729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.687154 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1212 00:16:34.690576 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1212 00:16:34.740828 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1212 00:16:34.867497 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.867595 1748736 retry.go:31] will retry after 722.785594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.904845 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1212 00:16:34.927760 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:34.927860 1748736 retry.go:31] will retry after 391.005091ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1212 00:16:35.050468 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:35.050562 1748736 retry.go:31] will retry after 486.671996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1212 00:16:35.097090 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:35.097166 1748736 retry.go:31] will retry after 816.319262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:35.319662 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1212 00:16:35.416834 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:35.416915 1748736 retry.go:31] will retry after 315.27571ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:35.506504 1748736 node_ready.go:53] error getting node "old-k8s-version-385687": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-385687": dial tcp 192.168.76.2:8443: connect: connection refused
I1212 00:16:35.537610 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1212 00:16:35.591264 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1212 00:16:35.693727 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:35.693769 1748736 retry.go:31] will retry after 656.072021ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:35.733043 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1212 00:16:35.786988 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:35.787032 1748736 retry.go:31] will retry after 588.013024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1212 00:16:35.864077 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:35.864109 1748736 retry.go:31] will retry after 738.109352ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:35.914311 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1212 00:16:36.027311 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:36.027343 1748736 retry.go:31] will retry after 1.026602041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:36.351041 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1212 00:16:36.375918 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1212 00:16:36.543035 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:36.543071 1748736 retry.go:31] will retry after 945.977228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:36.602427 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1212 00:16:36.628638 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:36.628678 1748736 retry.go:31] will retry after 721.544868ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1212 00:16:36.869799 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:36.869830 1748736 retry.go:31] will retry after 1.163026732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:37.054842 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1212 00:16:37.298843 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:37.298872 1748736 retry.go:31] will retry after 2.184383563s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:37.351102 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1212 00:16:37.489759 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1212 00:16:37.535425 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:37.535464 1748736 retry.go:31] will retry after 2.456672537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1212 00:16:37.664450 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:37.664481 1748736 retry.go:31] will retry after 1.617470483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:38.007203 1748736 node_ready.go:53] error getting node "old-k8s-version-385687": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-385687": dial tcp 192.168.76.2:8443: connect: connection refused
I1212 00:16:38.033364 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1212 00:16:38.351214 1748736 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:38.351265 1748736 retry.go:31] will retry after 2.597759057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1212 00:16:39.282512 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1212 00:16:39.483866 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1212 00:16:39.992617 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1212 00:16:40.949985 1748736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1212 00:16:49.006906 1748736 node_ready.go:53] error getting node "old-k8s-version-385687": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-385687": net/http: TLS handshake timeout
I1212 00:16:50.804584 1748736 node_ready.go:49] node "old-k8s-version-385687" has status "Ready":"True"
I1212 00:16:50.804658 1748736 node_ready.go:38] duration metric: took 17.298827916s for node "old-k8s-version-385687" to be "Ready" ...
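
node_ready.go above polls the Node object for up to 6m, treating connection-refused and TLS-handshake-timeout errors as transient while the apiserver restarts, and returns once the Ready condition is True. A hedged sketch with client-go; waitNodeReady is illustrative, and wait.PollImmediate is the older polling helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node until its Ready condition is True,
// treating errors as transient the way node_ready.go:53 logs them.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Println("error getting node:", err) // transient: keep polling
			return false, nil
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "old-k8s-version-385687", 6*time.Minute))
}
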
I1212 00:16:50.804685 1748736 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1212 00:16:51.144365 1748736 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-w5mlv" in "kube-system" namespace to be "Ready" ...
I1212 00:16:51.187011 1748736 pod_ready.go:93] pod "coredns-74ff55c5b-w5mlv" in "kube-system" namespace has status "Ready":"True"
I1212 00:16:51.187093 1748736 pod_ready.go:82] duration metric: took 42.682875ms for pod "coredns-74ff55c5b-w5mlv" in "kube-system" namespace to be "Ready" ...
I1212 00:16:51.187121 1748736 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-385687" in "kube-system" namespace to be "Ready" ...
I1212 00:16:51.277457 1748736 pod_ready.go:93] pod "etcd-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"True"
I1212 00:16:51.277524 1748736 pod_ready.go:82] duration metric: took 90.38054ms for pod "etcd-old-k8s-version-385687" in "kube-system" namespace to be "Ready" ...
I1212 00:16:51.277554 1748736 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-385687" in "kube-system" namespace to be "Ready" ...
I1212 00:16:53.292203 1748736 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:16:54.637041 1748736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (15.35448267s)
I1212 00:16:54.637123 1748736 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-385687"
I1212 00:16:54.637211 1748736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.153307607s)
I1212 00:16:54.637265 1748736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (14.644615819s)
I1212 00:16:54.943956 1748736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.993899382s)
I1212 00:16:54.947302 1748736 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-385687 addons enable metrics-server
I1212 00:16:54.950716 1748736 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
I1212 00:16:54.953732 1748736 addons.go:510] duration metric: took 21.789954378s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
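The four Completed: entries above all land within a second of each other while reporting 14-15s runtimes, so the addon manifests were applied concurrently; each job invokes the pinned v1.20.0 kubectl against the in-cluster kubeconfig, exactly as the command lines show. A sketch of one such apply from Go (run directly rather than over the ssh_runner, and without sudo; applyAddon is an illustrative helper, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyAddon runs the pinned kubectl against the in-cluster kubeconfig,
    // mirroring the "kubectl apply --force -f ..." commands in the log above.
    func applyAddon(manifests ...string) error {
        args := []string{"apply", "--force"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.20.0/kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
            fmt.Println("apply failed:", err)
        }
    }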
I1212 00:16:55.307344 1748736 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:16:57.783811 1748736 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:16:59.784084 1748736 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"True"
I1212 00:16:59.784150 1748736 pod_ready.go:82] duration metric: took 8.506575314s for pod "kube-apiserver-old-k8s-version-385687" in "kube-system" namespace to be "Ready" ...
I1212 00:16:59.784177 1748736 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace to be "Ready" ...
I1212 00:17:01.792869 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:04.291885 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:06.791194 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:09.291557 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:11.791339 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:14.291336 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:16.790225 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:18.791514 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:21.290589 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:23.790017 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:25.791284 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:27.791685 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:30.361504 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:32.791624 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:35.292823 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:37.790786 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:40.318953 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:42.791320 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:45.292696 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:47.293309 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:49.306397 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:51.790497 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:53.791196 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:56.289997 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:17:58.290930 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:00.297100 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:02.790391 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:04.796196 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:07.290504 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:09.293464 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:11.790980 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:13.291655 1748736 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:13.291678 1748736 pod_ready.go:82] duration metric: took 1m13.507479824s for pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace to be "Ready" ...
I1212 00:18:13.291690 1748736 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dg295" in "kube-system" namespace to be "Ready" ...
I1212 00:18:13.300037 1748736 pod_ready.go:93] pod "kube-proxy-dg295" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:13.300058 1748736 pod_ready.go:82] duration metric: took 8.360272ms for pod "kube-proxy-dg295" in "kube-system" namespace to be "Ready" ...
I1212 00:18:13.300070 1748736 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-385687" in "kube-system" namespace to be "Ready" ...
I1212 00:18:15.337504 1748736 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:17.307937 1748736 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:17.308009 1748736 pod_ready.go:82] duration metric: took 4.007922555s for pod "kube-scheduler-old-k8s-version-385687" in "kube-system" namespace to be "Ready" ...
I1212 00:18:17.308036 1748736 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace to be "Ready" ...
I1212 00:18:19.315696 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:21.316872 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:23.317101 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:25.815496 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:28.330359 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:30.888488 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:33.314308 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:35.315121 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:37.814691 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:40.314898 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:42.814239 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:44.814756 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:47.313566 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:49.314288 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:51.813227 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:53.813950 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:55.815286 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:58.314520 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:00.315890 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:02.813648 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:05.320058 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:07.814531 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:10.316881 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:12.814679 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:14.814973 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:17.314816 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:19.315313 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:21.814583 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:24.314628 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:26.315360 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:28.815167 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:31.360816 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:33.815121 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:36.315499 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:38.815071 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:41.314791 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:43.314867 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:45.813647 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:47.814202 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:50.317009 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:52.814237 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:55.313952 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:57.315872 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:59.321150 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:01.817633 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:04.314614 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:06.821447 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:09.314885 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:11.814720 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:13.814880 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:16.314266 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:18.314400 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:20.814247 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:22.814475 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:25.315507 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:27.813762 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:30.315217 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:32.814740 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:34.815055 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:37.313855 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:39.315097 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:41.320773 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:43.813880 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:45.821173 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:48.314945 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:50.815712 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:53.314221 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:55.315459 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:57.315797 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:59.814554 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:01.815293 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:04.314591 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:06.814131 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:08.814484 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:10.816760 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:13.314339 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:15.814531 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:17.814737 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:20.314274 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:22.316324 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:24.814451 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:26.814788 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:29.313629 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:31.315865 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:33.814444 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:36.314735 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:38.814252 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:40.815500 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:42.817125 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:45.315266 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:47.814726 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:50.314778 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:52.315226 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:54.818206 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:57.314796 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:59.315259 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:01.315401 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:03.813742 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:05.814061 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:07.814993 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:10.314245 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:12.315904 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:14.815568 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:17.319230 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:17.319259 1748736 pod_ready.go:82] duration metric: took 4m0.011200239s for pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace to be "Ready" ...
E1212 00:22:17.319271 1748736 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1212 00:22:17.319279 1748736 pod_ready.go:39] duration metric: took 5m26.514570873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
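This is the only wait that fails: metrics-server is polled from 00:18:17 to 00:22:17, every probe reports "Ready":"False", and the wait ends with context deadline exceeded; the kubelet entries gathered below show the cause, an image pinned to the unresolvable fake.domain registry. A sketch of a context-bounded condition wait in the same spirit (waitPodCondition and podIsReady are illustrative names):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // waitPodCondition polls podIsReady on a ticker until it succeeds or the
    // context deadline expires, in which case ctx.Err() surfaces as the
    // "context deadline exceeded" seen at 00:22:17.
    func waitPodCondition(ctx context.Context, podIsReady func() bool, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if podIsReady() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        // A short deadline keeps the demo quick; the log's wait ran for 4m0s.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        err := waitPodCondition(ctx, func() bool { return false }, 100*time.Millisecond)
        fmt.Println(err) // context deadline exceeded
    }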
I1212 00:22:17.319300 1748736 api_server.go:52] waiting for apiserver process to appear ...
I1212 00:22:17.319382 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1212 00:22:17.341230 1748736 logs.go:282] 2 containers: [b01ba427f07f d8c189ec5293]
I1212 00:22:17.341389 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1212 00:22:17.361460 1748736 logs.go:282] 2 containers: [c87014549ad3 3339d1ea608b]
I1212 00:22:17.361585 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1212 00:22:17.387326 1748736 logs.go:282] 2 containers: [26c238c9510b c1eb50b61731]
I1212 00:22:17.387524 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1212 00:22:17.409542 1748736 logs.go:282] 2 containers: [4fb87ba7f570 e1f79e78d53d]
I1212 00:22:17.409637 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1212 00:22:17.429723 1748736 logs.go:282] 2 containers: [0bb2da207c94 d2ad7ae21ec1]
I1212 00:22:17.429824 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1212 00:22:17.449540 1748736 logs.go:282] 2 containers: [d5db19c26736 b9c1c621b89a]
I1212 00:22:17.449626 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1212 00:22:17.467290 1748736 logs.go:282] 0 containers: []
W1212 00:22:17.467314 1748736 logs.go:284] No container was found matching "kindnet"
I1212 00:22:17.467372 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1212 00:22:17.489543 1748736 logs.go:282] 1 containers: [eb5d61e0470f]
I1212 00:22:17.489625 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1212 00:22:17.509704 1748736 logs.go:282] 2 containers: [5567aacfd876 f96d450a47e8]
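Log gathering starts by resolving container IDs per component with the docker ps -a --filter=name=k8s_<component> commands shown above; most components return two IDs, likely the exited pre-restart instance plus the current one, while kindnet matches nothing and is skipped. A sketch of that discovery step (listContainers is an illustrative name):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the discovery commands above: it filters docker ps
    // by the k8s_ name prefix and returns the matching container IDs.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listContainers("kube-apiserver")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }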
I1212 00:22:17.509738 1748736 logs.go:123] Gathering logs for kube-scheduler [e1f79e78d53d] ...
I1212 00:22:17.509750 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1f79e78d53d"
I1212 00:22:17.545642 1748736 logs.go:123] Gathering logs for kube-controller-manager [b9c1c621b89a] ...
I1212 00:22:17.545676 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c1c621b89a"
I1212 00:22:17.588900 1748736 logs.go:123] Gathering logs for storage-provisioner [5567aacfd876] ...
I1212 00:22:17.588940 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5567aacfd876"
I1212 00:22:17.625450 1748736 logs.go:123] Gathering logs for dmesg ...
I1212 00:22:17.625527 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1212 00:22:17.659538 1748736 logs.go:123] Gathering logs for describe nodes ...
I1212 00:22:17.659616 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1212 00:22:17.848576 1748736 logs.go:123] Gathering logs for etcd [c87014549ad3] ...
I1212 00:22:17.848608 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87014549ad3"
I1212 00:22:17.879161 1748736 logs.go:123] Gathering logs for container status ...
I1212 00:22:17.879192 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1212 00:22:17.940589 1748736 logs.go:123] Gathering logs for kube-scheduler [4fb87ba7f570] ...
I1212 00:22:17.940618 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb87ba7f570"
I1212 00:22:17.970379 1748736 logs.go:123] Gathering logs for kube-controller-manager [d5db19c26736] ...
I1212 00:22:17.970411 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5db19c26736"
I1212 00:22:18.016189 1748736 logs.go:123] Gathering logs for storage-provisioner [f96d450a47e8] ...
I1212 00:22:18.016230 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96d450a47e8"
I1212 00:22:18.041594 1748736 logs.go:123] Gathering logs for coredns [c1eb50b61731] ...
I1212 00:22:18.041626 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1eb50b61731"
I1212 00:22:18.071309 1748736 logs.go:123] Gathering logs for kube-proxy [0bb2da207c94] ...
I1212 00:22:18.071340 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb2da207c94"
I1212 00:22:18.098938 1748736 logs.go:123] Gathering logs for kubernetes-dashboard [eb5d61e0470f] ...
I1212 00:22:18.098969 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb5d61e0470f"
I1212 00:22:18.131454 1748736 logs.go:123] Gathering logs for Docker ...
I1212 00:22:18.131482 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1212 00:22:18.166794 1748736 logs.go:123] Gathering logs for kube-apiserver [d8c189ec5293] ...
I1212 00:22:18.166836 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8c189ec5293"
I1212 00:22:18.243162 1748736 logs.go:123] Gathering logs for etcd [3339d1ea608b] ...
I1212 00:22:18.243196 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3339d1ea608b"
I1212 00:22:18.280133 1748736 logs.go:123] Gathering logs for coredns [26c238c9510b] ...
I1212 00:22:18.280164 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26c238c9510b"
I1212 00:22:18.302243 1748736 logs.go:123] Gathering logs for kubelet ...
I1212 00:22:18.302276 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1212 00:22:18.359731 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:50 old-k8s-version-385687 kubelet[1416]: E1212 00:16:50.375774 1416 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:18.360000 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:50 old-k8s-version-385687 kubelet[1416]: E1212 00:16:50.397585 1416 reflector.go:138] object-"kube-system"/"coredns-token-rrgdv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-rrgdv" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:18.366594 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:54 old-k8s-version-385687 kubelet[1416]: E1212 00:16:54.000437 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:18.367637 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:55 old-k8s-version-385687 kubelet[1416]: E1212 00:16:55.428334 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.368523 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:56 old-k8s-version-385687 kubelet[1416]: E1212 00:16:56.498264 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.370228 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:10 old-k8s-version-385687 kubelet[1416]: E1212 00:17:10.831853 1416 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-n4jdg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-n4jdg" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:18.372440 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:11 old-k8s-version-385687 kubelet[1416]: E1212 00:17:11.272479 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:18.377156 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:19 old-k8s-version-385687 kubelet[1416]: E1212 00:17:19.217024 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:18.377545 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:19 old-k8s-version-385687 kubelet[1416]: E1212 00:17:19.713310 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.377855 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:24 old-k8s-version-385687 kubelet[1416]: E1212 00:17:24.771806 1416 pod_workers.go:191] Error syncing pod c35bb048-c903-47f6-8458-82dac2ca6358 ("storage-provisioner_kube-system(c35bb048-c903-47f6-8458-82dac2ca6358)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c35bb048-c903-47f6-8458-82dac2ca6358)"
W1212 00:22:18.378170 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:25 old-k8s-version-385687 kubelet[1416]: E1212 00:17:25.254131 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.380789 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:35 old-k8s-version-385687 kubelet[1416]: E1212 00:17:35.918222 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:18.382990 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:40 old-k8s-version-385687 kubelet[1416]: E1212 00:17:40.315954 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:18.383191 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:49 old-k8s-version-385687 kubelet[1416]: E1212 00:17:49.254240 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.383376 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:55 old-k8s-version-385687 kubelet[1416]: E1212 00:17:55.260828 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.385773 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:03 old-k8s-version-385687 kubelet[1416]: E1212 00:18:03.816532 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:18.385967 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:07 old-k8s-version-385687 kubelet[1416]: E1212 00:18:07.253787 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.386167 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:15 old-k8s-version-385687 kubelet[1416]: E1212 00:18:15.290346 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.388259 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:21 old-k8s-version-385687 kubelet[1416]: E1212 00:18:21.284205 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:18.388460 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:28 old-k8s-version-385687 kubelet[1416]: E1212 00:18:28.254710 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.388649 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:34 old-k8s-version-385687 kubelet[1416]: E1212 00:18:34.260169 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.388851 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:40 old-k8s-version-385687 kubelet[1416]: E1212 00:18:40.256316 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.389044 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:47 old-k8s-version-385687 kubelet[1416]: E1212 00:18:47.254188 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.391325 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:51 old-k8s-version-385687 kubelet[1416]: E1212 00:18:51.919190 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:18.391559 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:01 old-k8s-version-385687 kubelet[1416]: E1212 00:19:01.253957 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.391760 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:05 old-k8s-version-385687 kubelet[1416]: E1212 00:19:05.254139 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.391944 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:15 old-k8s-version-385687 kubelet[1416]: E1212 00:19:15.254932 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.392160 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:18 old-k8s-version-385687 kubelet[1416]: E1212 00:19:18.268917 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.392344 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:26 old-k8s-version-385687 kubelet[1416]: E1212 00:19:26.254839 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.392541 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:32 old-k8s-version-385687 kubelet[1416]: E1212 00:19:32.255755 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.392726 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:40 old-k8s-version-385687 kubelet[1416]: E1212 00:19:40.257318 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.392923 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:44 old-k8s-version-385687 kubelet[1416]: E1212 00:19:44.254431 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.395023 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:52 old-k8s-version-385687 kubelet[1416]: E1212 00:19:52.286229 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:18.395225 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:56 old-k8s-version-385687 kubelet[1416]: E1212 00:19:56.256770 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.395429 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:05 old-k8s-version-385687 kubelet[1416]: E1212 00:20:05.253993 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.395626 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:11 old-k8s-version-385687 kubelet[1416]: E1212 00:20:11.253831 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.395814 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:16 old-k8s-version-385687 kubelet[1416]: E1212 00:20:16.258580 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.398050 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:24 old-k8s-version-385687 kubelet[1416]: E1212 00:20:24.840888 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:18.398238 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:28 old-k8s-version-385687 kubelet[1416]: E1212 00:20:28.253767 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.398435 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:39 old-k8s-version-385687 kubelet[1416]: E1212 00:20:39.254296 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.398620 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:39 old-k8s-version-385687 kubelet[1416]: E1212 00:20:39.255523 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.398820 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:50 old-k8s-version-385687 kubelet[1416]: E1212 00:20:50.260236 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399005 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:54 old-k8s-version-385687 kubelet[1416]: E1212 00:20:54.253607 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399206 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:04 old-k8s-version-385687 kubelet[1416]: E1212 00:21:04.253856 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399391 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:09 old-k8s-version-385687 kubelet[1416]: E1212 00:21:09.253849 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399596 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:17 old-k8s-version-385687 kubelet[1416]: E1212 00:21:17.254163 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399784 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:20 old-k8s-version-385687 kubelet[1416]: E1212 00:21:20.254162 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399982 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:28 old-k8s-version-385687 kubelet[1416]: E1212 00:21:28.262765 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.400168 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:34 old-k8s-version-385687 kubelet[1416]: E1212 00:21:34.253963 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.400366 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:41 old-k8s-version-385687 kubelet[1416]: E1212 00:21:41.253839 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.400552 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:46 old-k8s-version-385687 kubelet[1416]: E1212 00:21:46.258966 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.400750 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:52 old-k8s-version-385687 kubelet[1416]: E1212 00:21:52.254181 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.400935 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:59 old-k8s-version-385687 kubelet[1416]: E1212 00:21:59.253931 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.401132 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:05 old-k8s-version-385687 kubelet[1416]: E1212 00:22:05.253864 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.401316 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:13 old-k8s-version-385687 kubelet[1416]: E1212 00:22:13.253794 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
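Every "Found kubelet problem" entry above reduces to three recurring causes: the reflector errors at 00:16:50-00:17:10 ("no relationship found between node ... and this object") are transient node-authorizer failures while the restarted node's pods re-register, and they stop after startup; metrics-server cycles through ErrImagePull/ImagePullBackOff because its image lives under fake.domain, which DNS cannot resolve; and dashboard-metrics-scraper cannot pull registry.k8s.io/echoserver:1.4 because the Docker daemon rejects Image manifest v2, schema 1 (see the DEPRECATION NOTICE in the pull error). A sketch of the kind of journal scan that flags these lines (the pattern is illustrative, not logs.go's actual rule):

    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        // Two journal lines abbreviated from the log above.
        journal := `Dec 12 00:16:50 old-k8s-version-385687 kubelet[1416]: E1212 00:16:50.375774 1416 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: ...
Dec 12 00:22:13 old-k8s-version-385687 kubelet[1416]: I1212 00:22:13.000000 1416 kubelet.go:1] ordinary info line`
        // Flag klog error lines (E...) emitted by the kubelet unit.
        problem := regexp.MustCompile(`kubelet\[\d+\]: E\d{4}`)
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            if problem.MatchString(sc.Text()) {
                fmt.Println("Found kubelet problem:", sc.Text())
            }
        }
    }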
I1212 00:22:18.401326 1748736 logs.go:123] Gathering logs for kube-apiserver [b01ba427f07f] ...
I1212 00:22:18.401340 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b01ba427f07f"
I1212 00:22:18.463548 1748736 logs.go:123] Gathering logs for kube-proxy [d2ad7ae21ec1] ...
I1212 00:22:18.463583 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2ad7ae21ec1"
I1212 00:22:18.489453 1748736 out.go:358] Setting ErrFile to fd 2...
I1212 00:22:18.489480 1748736 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1212 00:22:18.489562 1748736 out.go:270] X Problems detected in kubelet:
W1212 00:22:18.489576 1748736 out.go:270] Dec 12 00:21:46 old-k8s-version-385687 kubelet[1416]: E1212 00:21:46.258966 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:46 old-k8s-version-385687 kubelet[1416]: E1212 00:21:46.258966 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.489597 1748736 out.go:270] Dec 12 00:21:52 old-k8s-version-385687 kubelet[1416]: E1212 00:21:52.254181 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:52 old-k8s-version-385687 kubelet[1416]: E1212 00:21:52.254181 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.489622 1748736 out.go:270] Dec 12 00:21:59 old-k8s-version-385687 kubelet[1416]: E1212 00:21:59.253931 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:59 old-k8s-version-385687 kubelet[1416]: E1212 00:21:59.253931 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.489641 1748736 out.go:270] Dec 12 00:22:05 old-k8s-version-385687 kubelet[1416]: E1212 00:22:05.253864 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:22:05 old-k8s-version-385687 kubelet[1416]: E1212 00:22:05.253864 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.489663 1748736 out.go:270] Dec 12 00:22:13 old-k8s-version-385687 kubelet[1416]: E1212 00:22:13.253794 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:22:13 old-k8s-version-385687 kubelet[1416]: E1212 00:22:13.253794 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1212 00:22:18.489669 1748736 out.go:358] Setting ErrFile to fd 2...
I1212 00:22:18.489682 1748736 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:28.491369 1748736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:22:28.504527 1748736 api_server.go:72] duration metric: took 5m55.339563551s to wait for apiserver process to appear ...
I1212 00:22:28.504579 1748736 api_server.go:88] waiting for apiserver healthz status ...
I1212 00:22:28.504658 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1212 00:22:28.524975 1748736 logs.go:282] 2 containers: [b01ba427f07f d8c189ec5293]
I1212 00:22:28.525057 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1212 00:22:28.544779 1748736 logs.go:282] 2 containers: [c87014549ad3 3339d1ea608b]
I1212 00:22:28.544877 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1212 00:22:28.565064 1748736 logs.go:282] 2 containers: [26c238c9510b c1eb50b61731]
I1212 00:22:28.565150 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1212 00:22:28.585526 1748736 logs.go:282] 2 containers: [4fb87ba7f570 e1f79e78d53d]
I1212 00:22:28.585614 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1212 00:22:28.605266 1748736 logs.go:282] 2 containers: [0bb2da207c94 d2ad7ae21ec1]
I1212 00:22:28.605350 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1212 00:22:28.631922 1748736 logs.go:282] 2 containers: [d5db19c26736 b9c1c621b89a]
I1212 00:22:28.632009 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1212 00:22:28.653619 1748736 logs.go:282] 0 containers: []
W1212 00:22:28.653643 1748736 logs.go:284] No container was found matching "kindnet"
I1212 00:22:28.653706 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1212 00:22:28.673901 1748736 logs.go:282] 2 containers: [5567aacfd876 f96d450a47e8]
I1212 00:22:28.673984 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1212 00:22:28.701072 1748736 logs.go:282] 1 containers: [eb5d61e0470f]
I1212 00:22:28.701103 1748736 logs.go:123] Gathering logs for dmesg ...
I1212 00:22:28.701116 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1212 00:22:28.726410 1748736 logs.go:123] Gathering logs for etcd [3339d1ea608b] ...
I1212 00:22:28.726438 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3339d1ea608b"
I1212 00:22:28.760916 1748736 logs.go:123] Gathering logs for kube-controller-manager [d5db19c26736] ...
I1212 00:22:28.760954 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5db19c26736"
I1212 00:22:28.827751 1748736 logs.go:123] Gathering logs for kubernetes-dashboard [eb5d61e0470f] ...
I1212 00:22:28.827784 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb5d61e0470f"
I1212 00:22:28.851303 1748736 logs.go:123] Gathering logs for kubelet ...
I1212 00:22:28.851332 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1212 00:22:28.920803 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:50 old-k8s-version-385687 kubelet[1416]: E1212 00:16:50.375774 1416 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:28.921062 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:50 old-k8s-version-385687 kubelet[1416]: E1212 00:16:50.397585 1416 reflector.go:138] object-"kube-system"/"coredns-token-rrgdv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-rrgdv" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:28.927356 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:54 old-k8s-version-385687 kubelet[1416]: E1212 00:16:54.000437 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:28.928386 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:55 old-k8s-version-385687 kubelet[1416]: E1212 00:16:55.428334 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.929224 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:56 old-k8s-version-385687 kubelet[1416]: E1212 00:16:56.498264 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.930856 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:10 old-k8s-version-385687 kubelet[1416]: E1212 00:17:10.831853 1416 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-n4jdg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-n4jdg" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:28.933037 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:11 old-k8s-version-385687 kubelet[1416]: E1212 00:17:11.272479 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:28.937597 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:19 old-k8s-version-385687 kubelet[1416]: E1212 00:17:19.217024 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:28.937974 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:19 old-k8s-version-385687 kubelet[1416]: E1212 00:17:19.713310 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.938283 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:24 old-k8s-version-385687 kubelet[1416]: E1212 00:17:24.771806 1416 pod_workers.go:191] Error syncing pod c35bb048-c903-47f6-8458-82dac2ca6358 ("storage-provisioner_kube-system(c35bb048-c903-47f6-8458-82dac2ca6358)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c35bb048-c903-47f6-8458-82dac2ca6358)"
W1212 00:22:28.938597 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:25 old-k8s-version-385687 kubelet[1416]: E1212 00:17:25.254131 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.941196 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:35 old-k8s-version-385687 kubelet[1416]: E1212 00:17:35.918222 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:28.943458 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:40 old-k8s-version-385687 kubelet[1416]: E1212 00:17:40.315954 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:28.943664 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:49 old-k8s-version-385687 kubelet[1416]: E1212 00:17:49.254240 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.943853 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:55 old-k8s-version-385687 kubelet[1416]: E1212 00:17:55.260828 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.946074 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:03 old-k8s-version-385687 kubelet[1416]: E1212 00:18:03.816532 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:28.946258 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:07 old-k8s-version-385687 kubelet[1416]: E1212 00:18:07.253787 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.946457 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:15 old-k8s-version-385687 kubelet[1416]: E1212 00:18:15.290346 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.948521 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:21 old-k8s-version-385687 kubelet[1416]: E1212 00:18:21.284205 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:28.948723 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:28 old-k8s-version-385687 kubelet[1416]: E1212 00:18:28.254710 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.948910 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:34 old-k8s-version-385687 kubelet[1416]: E1212 00:18:34.260169 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.949108 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:40 old-k8s-version-385687 kubelet[1416]: E1212 00:18:40.256316 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.949292 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:47 old-k8s-version-385687 kubelet[1416]: E1212 00:18:47.254188 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.951520 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:51 old-k8s-version-385687 kubelet[1416]: E1212 00:18:51.919190 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:28.951708 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:01 old-k8s-version-385687 kubelet[1416]: E1212 00:19:01.253957 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.951904 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:05 old-k8s-version-385687 kubelet[1416]: E1212 00:19:05.254139 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.952088 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:15 old-k8s-version-385687 kubelet[1416]: E1212 00:19:15.254932 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.952285 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:18 old-k8s-version-385687 kubelet[1416]: E1212 00:19:18.268917 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.952470 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:26 old-k8s-version-385687 kubelet[1416]: E1212 00:19:26.254839 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.952666 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:32 old-k8s-version-385687 kubelet[1416]: E1212 00:19:32.255755 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.952849 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:40 old-k8s-version-385687 kubelet[1416]: E1212 00:19:40.257318 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.953045 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:44 old-k8s-version-385687 kubelet[1416]: E1212 00:19:44.254431 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.955100 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:52 old-k8s-version-385687 kubelet[1416]: E1212 00:19:52.286229 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:28.955296 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:56 old-k8s-version-385687 kubelet[1416]: E1212 00:19:56.256770 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.955507 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:05 old-k8s-version-385687 kubelet[1416]: E1212 00:20:05.253993 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.955711 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:11 old-k8s-version-385687 kubelet[1416]: E1212 00:20:11.253831 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.955901 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:16 old-k8s-version-385687 kubelet[1416]: E1212 00:20:16.258580 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.958141 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:24 old-k8s-version-385687 kubelet[1416]: E1212 00:20:24.840888 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:28.958327 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:28 old-k8s-version-385687 kubelet[1416]: E1212 00:20:28.253767 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.958541 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:39 old-k8s-version-385687 kubelet[1416]: E1212 00:20:39.254296 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.958730 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:39 old-k8s-version-385687 kubelet[1416]: E1212 00:20:39.255523 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.958934 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:50 old-k8s-version-385687 kubelet[1416]: E1212 00:20:50.260236 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.959119 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:54 old-k8s-version-385687 kubelet[1416]: E1212 00:20:54.253607 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.959315 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:04 old-k8s-version-385687 kubelet[1416]: E1212 00:21:04.253856 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.959504 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:09 old-k8s-version-385687 kubelet[1416]: E1212 00:21:09.253849 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.959703 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:17 old-k8s-version-385687 kubelet[1416]: E1212 00:21:17.254163 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.959904 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:20 old-k8s-version-385687 kubelet[1416]: E1212 00:21:20.254162 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.960106 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:28 old-k8s-version-385687 kubelet[1416]: E1212 00:21:28.262765 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.960292 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:34 old-k8s-version-385687 kubelet[1416]: E1212 00:21:34.253963 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.960490 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:41 old-k8s-version-385687 kubelet[1416]: E1212 00:21:41.253839 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.960676 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:46 old-k8s-version-385687 kubelet[1416]: E1212 00:21:46.258966 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.960874 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:52 old-k8s-version-385687 kubelet[1416]: E1212 00:21:52.254181 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.961060 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:59 old-k8s-version-385687 kubelet[1416]: E1212 00:21:59.253931 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.961258 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:05 old-k8s-version-385687 kubelet[1416]: E1212 00:22:05.253864 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.961443 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:13 old-k8s-version-385687 kubelet[1416]: E1212 00:22:13.253794 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.961641 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:19 old-k8s-version-385687 kubelet[1416]: E1212 00:22:19.253937 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.961827 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:28 old-k8s-version-385687 kubelet[1416]: E1212 00:22:28.255844 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1212 00:22:28.961837 1748736 logs.go:123] Gathering logs for describe nodes ...
I1212 00:22:28.961852 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1212 00:22:29.104804 1748736 logs.go:123] Gathering logs for kube-apiserver [b01ba427f07f] ...
I1212 00:22:29.104833 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b01ba427f07f"
I1212 00:22:29.147972 1748736 logs.go:123] Gathering logs for kube-apiserver [d8c189ec5293] ...
I1212 00:22:29.148008 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8c189ec5293"
I1212 00:22:29.213257 1748736 logs.go:123] Gathering logs for kube-controller-manager [b9c1c621b89a] ...
I1212 00:22:29.213294 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c1c621b89a"
I1212 00:22:29.256200 1748736 logs.go:123] Gathering logs for storage-provisioner [f96d450a47e8] ...
I1212 00:22:29.256293 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96d450a47e8"
I1212 00:22:29.280038 1748736 logs.go:123] Gathering logs for container status ...
I1212 00:22:29.280110 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1212 00:22:29.348232 1748736 logs.go:123] Gathering logs for etcd [c87014549ad3] ...
I1212 00:22:29.348264 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87014549ad3"
I1212 00:22:29.375100 1748736 logs.go:123] Gathering logs for coredns [c1eb50b61731] ...
I1212 00:22:29.375132 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1eb50b61731"
I1212 00:22:29.401243 1748736 logs.go:123] Gathering logs for coredns [26c238c9510b] ...
I1212 00:22:29.401271 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26c238c9510b"
I1212 00:22:29.424125 1748736 logs.go:123] Gathering logs for kube-scheduler [4fb87ba7f570] ...
I1212 00:22:29.424153 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb87ba7f570"
I1212 00:22:29.447967 1748736 logs.go:123] Gathering logs for kube-scheduler [e1f79e78d53d] ...
I1212 00:22:29.447996 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1f79e78d53d"
I1212 00:22:29.473912 1748736 logs.go:123] Gathering logs for kube-proxy [0bb2da207c94] ...
I1212 00:22:29.473945 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb2da207c94"
I1212 00:22:29.496691 1748736 logs.go:123] Gathering logs for kube-proxy [d2ad7ae21ec1] ...
I1212 00:22:29.496719 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2ad7ae21ec1"
I1212 00:22:29.519367 1748736 logs.go:123] Gathering logs for storage-provisioner [5567aacfd876] ...
I1212 00:22:29.519395 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5567aacfd876"
I1212 00:22:29.541417 1748736 logs.go:123] Gathering logs for Docker ...
I1212 00:22:29.541444 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1212 00:22:29.568180 1748736 out.go:358] Setting ErrFile to fd 2...
I1212 00:22:29.568210 1748736 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1212 00:22:29.568286 1748736 out.go:270] X Problems detected in kubelet:
W1212 00:22:29.568303 1748736 out.go:270] Dec 12 00:21:59 old-k8s-version-385687 kubelet[1416]: E1212 00:21:59.253931 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:29.568309 1748736 out.go:270] Dec 12 00:22:05 old-k8s-version-385687 kubelet[1416]: E1212 00:22:05.253864 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:29.568335 1748736 out.go:270] Dec 12 00:22:13 old-k8s-version-385687 kubelet[1416]: E1212 00:22:13.253794 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:29.568352 1748736 out.go:270] Dec 12 00:22:19 old-k8s-version-385687 kubelet[1416]: E1212 00:22:19.253937 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:29.568357 1748736 out.go:270] Dec 12 00:22:28 old-k8s-version-385687 kubelet[1416]: E1212 00:22:28.255844 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1212 00:22:29.568365 1748736 out.go:358] Setting ErrFile to fd 2...
I1212 00:22:29.568373 1748736 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:39.569398 1748736 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1212 00:22:39.586317 1748736 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1212 00:22:39.589497 1748736 out.go:201]
W1212 00:22:39.592417 1748736 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1212 00:22:39.592460 1748736 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1212 00:22:39.592479 1748736 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1212 00:22:39.592491 1748736 out.go:270] *
W1212 00:22:39.593441 1748736 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1212 00:22:39.595524 1748736 out.go:201]
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-385687 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0": exit status 102
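[editor's note] The exit path above ends with minikube's own recovery suggestion. A minimal sketch of acting on it, reusing the profile name and flags recorded in this log (that the retry succeeds after a purge is an assumption, not something this log demonstrates):
# wipe all profiles and cached state, per the "Suggestion" line above
minikube delete --all --purge
# then re-issue the same start command that failed
out/minikube-linux-arm64 start -p old-k8s-version-385687 --memory=2200 --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0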
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-385687
helpers_test.go:235: (dbg) docker inspect old-k8s-version-385687:
-- stdout --
[
{
"Id": "03ebc572b56d38fe493fe0dcb44189098fb032c839df3e3b3b5807a6261fb1f4",
"Created": "2024-12-12T00:13:39.176436087Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1748947,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-12-12T00:16:23.114991959Z",
"FinishedAt": "2024-12-12T00:16:21.909459148Z"
},
"Image": "sha256:02e8be8b1127faa30f09fff745d2a6d385248178d204468bf667a69a71dbf447",
"ResolvConfPath": "/var/lib/docker/containers/03ebc572b56d38fe493fe0dcb44189098fb032c839df3e3b3b5807a6261fb1f4/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/03ebc572b56d38fe493fe0dcb44189098fb032c839df3e3b3b5807a6261fb1f4/hostname",
"HostsPath": "/var/lib/docker/containers/03ebc572b56d38fe493fe0dcb44189098fb032c839df3e3b3b5807a6261fb1f4/hosts",
"LogPath": "/var/lib/docker/containers/03ebc572b56d38fe493fe0dcb44189098fb032c839df3e3b3b5807a6261fb1f4/03ebc572b56d38fe493fe0dcb44189098fb032c839df3e3b3b5807a6261fb1f4-json.log",
"Name": "/old-k8s-version-385687",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-385687:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-385687",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/e21e8f654b8d47b0a4ce55dcc5e0a5c5c4161234ff934a19944a033374968b2f-init/diff:/var/lib/docker/overlay2/9fd517e46cfc70e1c050deaeb2050990b654fdf2e2cbae0898d8f980e5d9e85e/diff",
"MergedDir": "/var/lib/docker/overlay2/e21e8f654b8d47b0a4ce55dcc5e0a5c5c4161234ff934a19944a033374968b2f/merged",
"UpperDir": "/var/lib/docker/overlay2/e21e8f654b8d47b0a4ce55dcc5e0a5c5c4161234ff934a19944a033374968b2f/diff",
"WorkDir": "/var/lib/docker/overlay2/e21e8f654b8d47b0a4ce55dcc5e0a5c5c4161234ff934a19944a033374968b2f/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-385687",
"Source": "/var/lib/docker/volumes/old-k8s-version-385687/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-385687",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-385687",
"name.minikube.sigs.k8s.io": "old-k8s-version-385687",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "d6b25f06e3df8781a174ae67b3548aa4f3711b49f70e89f715f42290d9517318",
"SandboxKey": "/var/run/docker/netns/d6b25f06e3df",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34605"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34606"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34609"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34607"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34608"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-385687": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "18ca458940f19f176e7ab54693402752106bad55bb40210b35d0b12cd3605a48",
"EndpointID": "ba52f17aab10dc9eba6602f569abc5868c7f7992cf8498f68e79bfdfd24a71f5",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-385687",
"03ebc572b56d"
]
}
}
}
}
]
-- /stdout --
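[editor's note] The inspect dump above is the full JSON; when only a few fields matter, docker's --format flag with a Go template keeps the output to one line. A sketch using field paths visible in the JSON above (profile name taken from this log):
# container state and restart count -> "running restarts=0"
docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-385687
# the node IP on the profile network -> expect 192.168.76.2 per the JSON above
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-385687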
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-385687 -n old-k8s-version-385687
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-385687 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-385687 logs -n 25: (1.472931601s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| delete | -p pause-429836 | pause-429836 | jenkins | v1.34.0 | 12 Dec 24 00:11 UTC | 12 Dec 24 00:11 UTC |
| | --alsologtostderr -v=5 | | | | | |
| delete | -p pause-429836 | pause-429836 | jenkins | v1.34.0 | 12 Dec 24 00:11 UTC | 12 Dec 24 00:11 UTC |
| start | -p cert-expiration-004495 | cert-expiration-004495 | jenkins | v1.34.0 | 12 Dec 24 00:11 UTC | 12 Dec 24 00:12 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | force-systemd-env-299395 | force-systemd-env-299395 | jenkins | v1.34.0 | 12 Dec 24 00:12 UTC | 12 Dec 24 00:12 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p force-systemd-env-299395 | force-systemd-env-299395 | jenkins | v1.34.0 | 12 Dec 24 00:12 UTC | 12 Dec 24 00:12 UTC |
| start | -p docker-flags-626767 | docker-flags-626767 | jenkins | v1.34.0 | 12 Dec 24 00:12 UTC | 12 Dec 24 00:12 UTC |
| | --cache-images=false | | | | | |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=false | | | | | |
| | --docker-env=FOO=BAR | | | | | |
| | --docker-env=BAZ=BAT | | | | | |
| | --docker-opt=debug | | | | | |
| | --docker-opt=icc=true | | | | | |
| | --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | docker-flags-626767 ssh | docker-flags-626767 | jenkins | v1.34.0 | 12 Dec 24 00:12 UTC | 12 Dec 24 00:12 UTC |
| | sudo systemctl show docker | | | | | |
| | --property=Environment | | | | | |
| | --no-pager | | | | | |
| ssh | docker-flags-626767 ssh | docker-flags-626767 | jenkins | v1.34.0 | 12 Dec 24 00:12 UTC | 12 Dec 24 00:12 UTC |
| | sudo systemctl show docker | | | | | |
| | --property=ExecStart | | | | | |
| | --no-pager | | | | | |
| delete | -p docker-flags-626767 | docker-flags-626767 | jenkins | v1.34.0 | 12 Dec 24 00:12 UTC | 12 Dec 24 00:12 UTC |
| start | -p cert-options-151095 | cert-options-151095 | jenkins | v1.34.0 | 12 Dec 24 00:12 UTC | 12 Dec 24 00:13 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | cert-options-151095 ssh | cert-options-151095 | jenkins | v1.34.0 | 12 Dec 24 00:13 UTC | 12 Dec 24 00:13 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-151095 -- sudo | cert-options-151095 | jenkins | v1.34.0 | 12 Dec 24 00:13 UTC | 12 Dec 24 00:13 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-151095 | cert-options-151095 | jenkins | v1.34.0 | 12 Dec 24 00:13 UTC | 12 Dec 24 00:13 UTC |
| start | -p old-k8s-version-385687 | old-k8s-version-385687 | jenkins | v1.34.0 | 12 Dec 24 00:13 UTC | 12 Dec 24 00:15 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-004495 | cert-expiration-004495 | jenkins | v1.34.0 | 12 Dec 24 00:15 UTC | 12 Dec 24 00:16 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p cert-expiration-004495 | cert-expiration-004495 | jenkins | v1.34.0 | 12 Dec 24 00:16 UTC | 12 Dec 24 00:16 UTC |
| addons | enable metrics-server -p old-k8s-version-385687 | old-k8s-version-385687 | jenkins | v1.34.0 | 12 Dec 24 00:16 UTC | 12 Dec 24 00:16 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| start | -p no-preload-828307 | no-preload-828307 | jenkins | v1.34.0 | 12 Dec 24 00:16 UTC | 12 Dec 24 00:17 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| stop | -p old-k8s-version-385687 | old-k8s-version-385687 | jenkins | v1.34.0 | 12 Dec 24 00:16 UTC | 12 Dec 24 00:16 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-385687 | old-k8s-version-385687 | jenkins | v1.34.0 | 12 Dec 24 00:16 UTC | 12 Dec 24 00:16 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-385687 | old-k8s-version-385687 | jenkins | v1.34.0 | 12 Dec 24 00:16 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-828307 | no-preload-828307 | jenkins | v1.34.0 | 12 Dec 24 00:17 UTC | 12 Dec 24 00:17 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-828307 | no-preload-828307 | jenkins | v1.34.0 | 12 Dec 24 00:17 UTC | 12 Dec 24 00:18 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-828307 | no-preload-828307 | jenkins | v1.34.0 | 12 Dec 24 00:18 UTC | 12 Dec 24 00:18 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-828307 | no-preload-828307 | jenkins | v1.34.0 | 12 Dec 24 00:18 UTC | 12 Dec 24 00:22 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/12/12 00:18:04
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.23.3 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
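The header above documents the klog line format used for every entry in this trace: a severity letter, month/day, a microsecond timestamp, the thread id, then the file:line of the emitting call site. A minimal Go sketch (a hypothetical helper, not part of minikube) that splits one of these lines into its fields:

package main

import (
	"fmt"
	"regexp"
)

// klogLine follows the documented header format:
//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	// Example line copied from this log.
	line := "I1212 00:18:04.558670 1756238 out.go:345] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog-formatted line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s source=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}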
I1212 00:18:04.558670 1756238 out.go:345] Setting OutFile to fd 1 ...
I1212 00:18:04.558927 1756238 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:18:04.558950 1756238 out.go:358] Setting ErrFile to fd 2...
I1212 00:18:04.558956 1756238 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:18:04.559312 1756238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-1433638/.minikube/bin
I1212 00:18:04.559873 1756238 out.go:352] Setting JSON to false
I1212 00:18:04.560967 1756238 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25232,"bootTime":1733937453,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1212 00:18:04.561076 1756238 start.go:139] virtualization:
I1212 00:18:04.564257 1756238 out.go:177] * [no-preload-828307] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1212 00:18:04.566501 1756238 out.go:177] - MINIKUBE_LOCATION=20083
I1212 00:18:04.566609 1756238 notify.go:220] Checking for updates...
I1212 00:18:04.572197 1756238 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1212 00:18:04.575168 1756238 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20083-1433638/kubeconfig
I1212 00:18:04.578092 1756238 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-1433638/.minikube
I1212 00:18:04.581086 1756238 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1212 00:18:04.584089 1756238 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1212 00:18:04.587692 1756238 config.go:182] Loaded profile config "no-preload-828307": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1212 00:18:04.588308 1756238 driver.go:394] Setting default libvirt URI to qemu:///system
I1212 00:18:04.611120 1756238 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
I1212 00:18:04.611244 1756238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:18:04.686319 1756238 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-12 00:18:04.675907179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1212 00:18:04.686433 1756238 docker.go:318] overlay module found
I1212 00:18:04.689579 1756238 out.go:177] * Using the docker driver based on existing profile
I1212 00:18:04.692440 1756238 start.go:297] selected driver: docker
I1212 00:18:04.692465 1756238 start.go:901] validating driver "docker" against &{Name:no-preload-828307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-828307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1212 00:18:04.692585 1756238 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1212 00:18:04.693355 1756238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:18:04.745875 1756238 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-12 00:18:04.735702458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1212 00:18:04.746279 1756238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1212 00:18:04.746308 1756238 cni.go:84] Creating CNI manager for ""
I1212 00:18:04.746361 1756238 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1212 00:18:04.746412 1756238 start.go:340] cluster config:
{Name:no-preload-828307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-828307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
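The cluster config dumped above is the same structure minikube persists as the profile's config.json (see the "Saving config to ..." line just below). A hedged Go sketch that decodes a few of those fields; the names mirror the dump, but the exact on-disk schema is an assumption here, not verified against minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Partial view of the profile config: only the fields printed below are
// declared; json.Unmarshal ignores everything else in the document.
type clusterConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}
}

func main() {
	// e.g. .minikube/profiles/no-preload-828307/config.json (path from the log)
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	var cc clusterConfig
	if err := json.Unmarshal(data, &cc); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s k8s=%s runtime=%s\n", cc.Name, cc.Driver,
		cc.KubernetesConfig.KubernetesVersion, cc.KubernetesConfig.ContainerRuntime)
}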
I1212 00:18:04.751437 1756238 out.go:177] * Starting "no-preload-828307" primary control-plane node in "no-preload-828307" cluster
I1212 00:18:04.754239 1756238 cache.go:121] Beginning downloading kic base image for docker with docker
I1212 00:18:04.757144 1756238 out.go:177] * Pulling base image v0.0.45-1733912881-20083 ...
I1212 00:18:04.759956 1756238 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1212 00:18:04.760001 1756238 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon
I1212 00:18:04.760136 1756238 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/no-preload-828307/config.json ...
I1212 00:18:04.760440 1756238 cache.go:107] acquiring lock: {Name:mk55e7d0154655365974aed82f99cccdf2b56d27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:18:04.760624 1756238 cache.go:107] acquiring lock: {Name:mk87e71058cc11cc8cb002ea341bed201f672992 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:18:04.760533 1756238 cache.go:107] acquiring lock: {Name:mkc26bcadb6540d0f1a283459e91f2a811590008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:18:04.760685 1756238 cache.go:107] acquiring lock: {Name:mk5eed7827f940451ba5be4c946583bc8e762ae2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:18:04.760704 1756238 cache.go:115] /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1212 00:18:04.760716 1756238 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 281.347µs
I1212 00:18:04.760726 1756238 cache.go:115] /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
I1212 00:18:04.760733 1756238 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1212 00:18:04.760734 1756238 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 50.009µs
I1212 00:18:04.760746 1756238 cache.go:107] acquiring lock: {Name:mk5efc88aee946a95d68a49a5341cca6cb635ce9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:18:04.760762 1756238 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
I1212 00:18:04.760599 1756238 cache.go:107] acquiring lock: {Name:mk86c54f4da9579e7d113884532977c7b9a2727e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:18:04.760783 1756238 cache.go:115] /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
I1212 00:18:04.760789 1756238 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 44.553µs
I1212 00:18:04.760795 1756238 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
I1212 00:18:04.760580 1756238 cache.go:107] acquiring lock: {Name:mk2d12a223cd8fa5782723f07e53704aadc70add Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:18:04.760806 1756238 cache.go:107] acquiring lock: {Name:mk2cf963eeea4f6fc51c3f8e74f6ae6ac1acd975 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:18:04.760827 1756238 cache.go:115] /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
I1212 00:18:04.760836 1756238 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 257.766µs
I1212 00:18:04.760842 1756238 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
I1212 00:18:04.760845 1756238 cache.go:115] /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
I1212 00:18:04.760851 1756238 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 45.8µs
I1212 00:18:04.760858 1756238 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
I1212 00:18:04.760795 1756238 cache.go:115] /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
I1212 00:18:04.760870 1756238 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 272.814µs
I1212 00:18:04.760874 1756238 cache.go:115] /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
I1212 00:18:04.760858 1756238 cache.go:115] /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
I1212 00:18:04.760881 1756238 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 258.168µs
I1212 00:18:04.760883 1756238 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 364.536µs
I1212 00:18:04.760903 1756238 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
I1212 00:18:04.760876 1756238 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
I1212 00:18:04.760887 1756238 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /home/jenkins/minikube-integration/20083-1433638/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
I1212 00:18:04.760914 1756238 cache.go:87] Successfully saved all images to host disk.
I1212 00:18:04.798956 1756238 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon, skipping pull
I1212 00:18:04.798981 1756238 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 exists in daemon, skipping load
I1212 00:18:04.798994 1756238 cache.go:194] Successfully downloaded all kic artifacts
I1212 00:18:04.799025 1756238 start.go:360] acquireMachinesLock for no-preload-828307: {Name:mkf8d2871e0eeaa3740eb2040ffcdbdb49263f6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:18:04.799083 1756238 start.go:364] duration metric: took 37.817µs to acquireMachinesLock for "no-preload-828307"
I1212 00:18:04.799107 1756238 start.go:96] Skipping create...Using existing machine configuration
I1212 00:18:04.799117 1756238 fix.go:54] fixHost starting:
I1212 00:18:04.799382 1756238 cli_runner.go:164] Run: docker container inspect no-preload-828307 --format={{.State.Status}}
I1212 00:18:04.815862 1756238 fix.go:112] recreateIfNeeded on no-preload-828307: state=Stopped err=<nil>
W1212 00:18:04.815893 1756238 fix.go:138] unexpected machine state, will restart: <nil>
I1212 00:18:04.819164 1756238 out.go:177] * Restarting existing docker container for "no-preload-828307" ...
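fixHost inspected the container with `docker container inspect --format={{.State.Status}}`, found it stopped, and now takes the restart path. A standalone Go sketch of the same check-then-start flow; the container name and the "exited" mapping are assumptions based on this log, not minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "no-preload-828307" // container name taken from the log
	// Same inspect invocation the log shows.
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		panic(err)
	}
	state := strings.TrimSpace(string(out))
	fmt.Println("container state:", state)
	// Docker reports a stopped container as "exited"; minikube surfaces
	// this as state=Stopped and restarts it, as the next lines do.
	if state == "exited" {
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
		fmt.Println("restarted", name)
	}
}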
I1212 00:18:02.790391 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:04.796196 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:07.290504 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:04.822038 1756238 cli_runner.go:164] Run: docker start no-preload-828307
I1212 00:18:05.182007 1756238 cli_runner.go:164] Run: docker container inspect no-preload-828307 --format={{.State.Status}}
I1212 00:18:05.205057 1756238 kic.go:430] container "no-preload-828307" state is running.
I1212 00:18:05.207370 1756238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-828307
I1212 00:18:05.230918 1756238 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/no-preload-828307/config.json ...
I1212 00:18:05.231147 1756238 machine.go:93] provisionDockerMachine start ...
I1212 00:18:05.231214 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:05.251650 1756238 main.go:141] libmachine: Using SSH client type: native
I1212 00:18:05.252147 1756238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34610 <nil> <nil>}
I1212 00:18:05.252240 1756238 main.go:141] libmachine: About to run SSH command:
hostname
I1212 00:18:05.253289 1756238 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1212 00:18:08.395198 1756238 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-828307
I1212 00:18:08.395227 1756238 ubuntu.go:169] provisioning hostname "no-preload-828307"
I1212 00:18:08.395293 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:08.414203 1756238 main.go:141] libmachine: Using SSH client type: native
I1212 00:18:08.414465 1756238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34610 <nil> <nil>}
I1212 00:18:08.414478 1756238 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-828307 && echo "no-preload-828307" | sudo tee /etc/hostname
I1212 00:18:08.563320 1756238 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-828307
I1212 00:18:08.563532 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:08.583725 1756238 main.go:141] libmachine: Using SSH client type: native
I1212 00:18:08.584100 1756238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34610 <nil> <nil>}
I1212 00:18:08.584128 1756238 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-828307' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-828307/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-828307' | sudo tee -a /etc/hosts;
fi
fi
I1212 00:18:08.727891 1756238 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1212 00:18:08.727923 1756238 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20083-1433638/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-1433638/.minikube}
I1212 00:18:08.727955 1756238 ubuntu.go:177] setting up certificates
I1212 00:18:08.727966 1756238 provision.go:84] configureAuth start
I1212 00:18:08.728028 1756238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-828307
I1212 00:18:08.746798 1756238 provision.go:143] copyHostCerts
I1212 00:18:08.746866 1756238 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-1433638/.minikube/cert.pem, removing ...
I1212 00:18:08.746875 1756238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-1433638/.minikube/cert.pem
I1212 00:18:08.746958 1756238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-1433638/.minikube/cert.pem (1123 bytes)
I1212 00:18:08.747055 1756238 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-1433638/.minikube/key.pem, removing ...
I1212 00:18:08.747060 1756238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-1433638/.minikube/key.pem
I1212 00:18:08.747085 1756238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-1433638/.minikube/key.pem (1675 bytes)
I1212 00:18:08.747140 1756238 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.pem, removing ...
I1212 00:18:08.747145 1756238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.pem
I1212 00:18:08.747168 1756238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.pem (1082 bytes)
I1212 00:18:08.747262 1756238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca-key.pem org=jenkins.no-preload-828307 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-828307]
I1212 00:18:09.087080 1756238 provision.go:177] copyRemoteCerts
I1212 00:18:09.087167 1756238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1212 00:18:09.087213 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:09.105976 1756238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/no-preload-828307/id_rsa Username:docker}
I1212 00:18:09.201320 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1212 00:18:09.229424 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1212 00:18:09.255306 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1212 00:18:09.281384 1756238 provision.go:87] duration metric: took 553.403933ms to configureAuth
I1212 00:18:09.281415 1756238 ubuntu.go:193] setting minikube options for container-runtime
I1212 00:18:09.281647 1756238 config.go:182] Loaded profile config "no-preload-828307": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1212 00:18:09.281723 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:09.302736 1756238 main.go:141] libmachine: Using SSH client type: native
I1212 00:18:09.303236 1756238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34610 <nil> <nil>}
I1212 00:18:09.303257 1756238 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1212 00:18:09.437466 1756238 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I1212 00:18:09.437491 1756238 ubuntu.go:71] root file system type: overlay
I1212 00:18:09.437613 1756238 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1212 00:18:09.437685 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:09.456557 1756238 main.go:141] libmachine: Using SSH client type: native
I1212 00:18:09.456805 1756238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34610 <nil> <nil>}
I1212 00:18:09.456883 1756238 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1212 00:18:09.293464 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:11.790980 1748736 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:09.604939 1756238 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1212 00:18:09.605025 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:09.623661 1756238 main.go:141] libmachine: Using SSH client type: native
I1212 00:18:09.624035 1756238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil> [] 0s} 127.0.0.1 34610 <nil> <nil>}
I1212 00:18:09.624095 1756238 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1212 00:18:09.782137 1756238 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1212 00:18:09.782160 1756238 machine.go:96] duration metric: took 4.551004182s to provisionDockerMachine
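The provisioning step above clears the inherited ExecStart=, installs the TLS-enabled dockerd command line, then swaps the unit in with diff/mv and restarts docker. The effective command can be confirmed the same way the docker-flags test in the Audit table does, via `systemctl show`; a minimal Go sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the Audit-table check:
	//   systemctl show docker --property=ExecStart --no-pager
	out, err := exec.Command("systemctl", "show", "docker",
		"--property=ExecStart", "--no-pager").CombinedOutput()
	if err != nil {
		panic(err)
	}
	// Prints e.g. ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H tcp://0.0.0.0:2376 ... }
	fmt.Print(string(out))
}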
I1212 00:18:09.782173 1756238 start.go:293] postStartSetup for "no-preload-828307" (driver="docker")
I1212 00:18:09.782198 1756238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1212 00:18:09.782271 1756238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1212 00:18:09.782309 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:09.804713 1756238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/no-preload-828307/id_rsa Username:docker}
I1212 00:18:09.900899 1756238 ssh_runner.go:195] Run: cat /etc/os-release
I1212 00:18:09.904430 1756238 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1212 00:18:09.904466 1756238 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1212 00:18:09.904478 1756238 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1212 00:18:09.904485 1756238 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1212 00:18:09.904496 1756238 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-1433638/.minikube/addons for local assets ...
I1212 00:18:09.904565 1756238 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-1433638/.minikube/files for local assets ...
I1212 00:18:09.904644 1756238 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-1433638/.minikube/files/etc/ssl/certs/14390162.pem -> 14390162.pem in /etc/ssl/certs
I1212 00:18:09.904751 1756238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1212 00:18:09.914420 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/files/etc/ssl/certs/14390162.pem --> /etc/ssl/certs/14390162.pem (1708 bytes)
I1212 00:18:09.940016 1756238 start.go:296] duration metric: took 157.827784ms for postStartSetup
I1212 00:18:09.940156 1756238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1212 00:18:09.940221 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:09.958189 1756238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/no-preload-828307/id_rsa Username:docker}
I1212 00:18:10.049458 1756238 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1212 00:18:10.054607 1756238 fix.go:56] duration metric: took 5.255481716s for fixHost
I1212 00:18:10.054636 1756238 start.go:83] releasing machines lock for "no-preload-828307", held for 5.25553856s
I1212 00:18:10.054705 1756238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-828307
I1212 00:18:10.073296 1756238 ssh_runner.go:195] Run: cat /version.json
I1212 00:18:10.073350 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:10.073636 1756238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1212 00:18:10.073700 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:10.100286 1756238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/no-preload-828307/id_rsa Username:docker}
I1212 00:18:10.114965 1756238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/no-preload-828307/id_rsa Username:docker}
I1212 00:18:10.203113 1756238 ssh_runner.go:195] Run: systemctl --version
I1212 00:18:10.358979 1756238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1212 00:18:10.363886 1756238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1212 00:18:10.385066 1756238 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1212 00:18:10.385175 1756238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1212 00:18:10.394520 1756238 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
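The two `find ... -exec` runs above patch any loopback CNI config (inserting a "name" field if missing and pinning cniVersion to 1.0.0) and disable bridge/podman configs. A Go sketch of the same loopback patch done with encoding/json instead of sed; the sample input is hypothetical, since the log shows only the patch commands, not the file contents:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical pre-patch loopback config.
	raw := []byte(`{"cniVersion": "0.3.1", "type": "loopback"}`)

	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	// The same two edits the sed invocations perform:
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // insert missing "name" field
	}
	conf["cniVersion"] = "1.0.0" // pin the CNI version

	patched, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patched))
}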
I1212 00:18:10.394548 1756238 start.go:495] detecting cgroup driver to use...
I1212 00:18:10.394581 1756238 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1212 00:18:10.394699 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:18:10.412391 1756238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I1212 00:18:10.422871 1756238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1212 00:18:10.434991 1756238 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1212 00:18:10.435115 1756238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1212 00:18:10.446154 1756238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:18:10.457063 1756238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1212 00:18:10.467806 1756238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:18:10.478215 1756238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1212 00:18:10.487866 1756238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1212 00:18:10.498605 1756238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1212 00:18:10.509492 1756238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1212 00:18:10.520009 1756238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1212 00:18:10.530085 1756238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1212 00:18:10.539525 1756238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:18:10.637634 1756238 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1212 00:18:10.776978 1756238 start.go:495] detecting cgroup driver to use...
I1212 00:18:10.777075 1756238 detect.go:187] detected "cgroupfs" cgroup driver on host os
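Both cgroup-driver detections above report "cgroupfs". The same answer can be pulled from the engine itself, which is exactly what this log does later with `docker info --format {{.CgroupDriver}}` (and what the force-systemd-env test in the Audit table runs over SSH); a minimal sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query that appears later in this log and in the Audit table.
	out, err := exec.Command("docker", "info", "--format",
		"{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "cgroupfs" on this host
}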
I1212 00:18:10.777160 1756238 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1212 00:18:10.802948 1756238 cruntime.go:279] skipping containerd shutdown because we are bound to it
I1212 00:18:10.803102 1756238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 00:18:10.818869 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:18:10.838267 1756238 ssh_runner.go:195] Run: which cri-dockerd
I1212 00:18:10.843465 1756238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1212 00:18:10.855960 1756238 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I1212 00:18:10.881569 1756238 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1212 00:18:10.991579 1756238 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1212 00:18:11.108482 1756238 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I1212 00:18:11.108617 1756238 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1212 00:18:11.129776 1756238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:18:11.260937 1756238 ssh_runner.go:195] Run: sudo systemctl restart docker
I1212 00:18:11.803545 1756238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1212 00:18:11.816091 1756238 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I1212 00:18:11.830149 1756238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1212 00:18:11.842905 1756238 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1212 00:18:11.929048 1756238 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1212 00:18:12.031201 1756238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:18:12.119963 1756238 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1212 00:18:12.136150 1756238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1212 00:18:12.149249 1756238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:18:12.275143 1756238 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1212 00:18:12.379650 1756238 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1212 00:18:12.379772 1756238 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1212 00:18:12.386373 1756238 start.go:563] Will wait 60s for crictl version
I1212 00:18:12.386521 1756238 ssh_runner.go:195] Run: which crictl
I1212 00:18:12.392226 1756238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1212 00:18:12.674145 1756238 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.4.0
RuntimeApiVersion: v1
I1212 00:18:12.674240 1756238 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1212 00:18:12.718207 1756238 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1212 00:18:12.745806 1756238 out.go:235] * Preparing Kubernetes v1.31.2 on Docker 27.4.0 ...
I1212 00:18:12.745946 1756238 cli_runner.go:164] Run: docker network inspect no-preload-828307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 00:18:12.767395 1756238 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1212 00:18:12.771369 1756238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1212 00:18:12.782680 1756238 kubeadm.go:883] updating cluster {Name:no-preload-828307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-828307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1212 00:18:12.782809 1756238 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1212 00:18:12.782862 1756238 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1212 00:18:12.813403 1756238 docker.go:689] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I1212 00:18:12.813434 1756238 cache_images.go:84] Images are preloaded, skipping loading
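cache_images.go concluded the images are preloaded by listing the daemon's tags (the `docker images --format {{.Repository}}:{{.Tag}}` run above) and checking them against the required set. A hedged Go sketch of that comparison, using a subset of the stdout block as the required list:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// A subset of the required images, copied from the stdout block above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.2",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	}
	out, err := exec.Command("docker", "images", "--format",
		"{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing:", img)
			return
		}
	}
	fmt.Println("Images are preloaded, skipping loading") // matches the log line
}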
I1212 00:18:12.813445 1756238 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.2 docker true true} ...
I1212 00:18:12.813568 1756238 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-828307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.31.2 ClusterName:no-preload-828307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1212 00:18:12.813641 1756238 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1212 00:18:12.901046 1756238 cni.go:84] Creating CNI manager for ""
I1212 00:18:12.901076 1756238 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1212 00:18:12.901088 1756238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1212 00:18:12.901108 1756238 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-828307 NodeName:no-preload-828307 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1212 00:18:12.901239 1756238 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "no-preload-828307"
  kubeletExtraArgs:
  - name: "node-ip"
    value: "192.168.85.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
  - name: "enable-admission-plugins"
    value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
  - name: "allocate-node-cidrs"
    value: "true"
  - name: "leader-elect"
    value: "false"
scheduler:
  extraArgs:
  - name: "leader-elect"
    value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
    - name: "proxy-refresh-interval"
      value: "70000"
kubernetesVersion: v1.31.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
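
The four-document config above is rendered from the kubeadm options logged at kubeadm.go:189, presumably via Go text/template in minikube's bootstrapper code. A reduced stand-in showing only the mechanism for the first document; the template string and field names here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A cut-down stand-in for minikube's kubeadm template: just the
// InitConfiguration document, parameterized by the values seen in the log.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	if err := t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.85.2",
		"APIServerPort":    8443,
		"CRISocket":        "unix:///var/run/cri-dockerd.sock",
		"NodeName":         "no-preload-828307",
	}); err != nil {
		panic(err)
	}
}
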
I1212 00:18:12.901311 1756238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
I1212 00:18:12.911559 1756238 binaries.go:44] Found k8s binaries, skipping transfer
I1212 00:18:12.911661 1756238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1212 00:18:12.920583 1756238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
I1212 00:18:12.938830 1756238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1212 00:18:12.957306 1756238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
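
"scp memory --> <path>" means the payload never touches the local disk: the rendered bytes are streamed over the existing SSH connection and written to the target path on the guest. A rough equivalent using golang.org/x/crypto/ssh; client construction (host, keys) is assumed, and the helper name is illustrative:

package sketch

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// scpMemory streams data over an established *ssh.Client and writes it to
// dst on the guest, roughly what the "scp memory --> ..." log lines do.
func scpMemory(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee keeps the transfer a single remote command; sudo matches the
	// root-owned destinations in the log.
	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
}
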
I1212 00:18:12.976322 1756238 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1212 00:18:12.980129 1756238 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
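
The /etc/hosts rewrite above is a small atomicity trick: grep -v strips any stale control-plane.minikube.internal entry, echo appends the current mapping, the combined output lands in a temp file, and only then is the temp file copied over /etc/hosts, so the file is never observed half-written.
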
I1212 00:18:12.991552 1756238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:18:13.080062 1756238 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1212 00:18:13.096522 1756238 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/no-preload-828307 for IP: 192.168.85.2
I1212 00:18:13.096541 1756238 certs.go:194] generating shared ca certs ...
I1212 00:18:13.096562 1756238 certs.go:226] acquiring lock for ca certs: {Name:mk79f9e2f05bff5bb27ff07029e74e2d72f5e267 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:18:13.096696 1756238 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.key
I1212 00:18:13.096739 1756238 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/proxy-client-ca.key
I1212 00:18:13.096745 1756238 certs.go:256] generating profile certs ...
I1212 00:18:13.096836 1756238 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/no-preload-828307/client.key
I1212 00:18:13.096879 1756238 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/no-preload-828307/apiserver.key.c0aef102
I1212 00:18:13.096917 1756238 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/no-preload-828307/proxy-client.key
I1212 00:18:13.097027 1756238 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/1439016.pem (1338 bytes)
W1212 00:18:13.097058 1756238 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/1439016_empty.pem, impossibly tiny 0 bytes
I1212 00:18:13.097066 1756238 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca-key.pem (1675 bytes)
I1212 00:18:13.097089 1756238 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/ca.pem (1082 bytes)
I1212 00:18:13.097114 1756238 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/cert.pem (1123 bytes)
I1212 00:18:13.097134 1756238 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/key.pem (1675 bytes)
I1212 00:18:13.097179 1756238 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-1433638/.minikube/files/etc/ssl/certs/14390162.pem (1708 bytes)
I1212 00:18:13.097917 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1212 00:18:13.141823 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1212 00:18:13.171312 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1212 00:18:13.203066 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1212 00:18:13.277993 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/no-preload-828307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1212 00:18:13.329551 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/no-preload-828307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1212 00:18:13.366619 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/no-preload-828307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1212 00:18:13.398557 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/profiles/no-preload-828307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1212 00:18:13.430662 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1212 00:18:13.461084 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/certs/1439016.pem --> /usr/share/ca-certificates/1439016.pem (1338 bytes)
I1212 00:18:13.488988 1756238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-1433638/.minikube/files/etc/ssl/certs/14390162.pem --> /usr/share/ca-certificates/14390162.pem (1708 bytes)
I1212 00:18:13.520628 1756238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1212 00:18:13.540511 1756238 ssh_runner.go:195] Run: openssl version
I1212 00:18:13.548822 1756238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1212 00:18:13.561959 1756238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1212 00:18:13.565927 1756238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:21 /usr/share/ca-certificates/minikubeCA.pem
I1212 00:18:13.566026 1756238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1212 00:18:13.576130 1756238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1212 00:18:13.587568 1756238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1439016.pem && ln -fs /usr/share/ca-certificates/1439016.pem /etc/ssl/certs/1439016.pem"
I1212 00:18:13.608481 1756238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1439016.pem
I1212 00:18:13.612221 1756238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:28 /usr/share/ca-certificates/1439016.pem
I1212 00:18:13.612342 1756238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1439016.pem
I1212 00:18:13.619357 1756238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1439016.pem /etc/ssl/certs/51391683.0"
I1212 00:18:13.629797 1756238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14390162.pem && ln -fs /usr/share/ca-certificates/14390162.pem /etc/ssl/certs/14390162.pem"
I1212 00:18:13.640809 1756238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14390162.pem
I1212 00:18:13.644999 1756238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:28 /usr/share/ca-certificates/14390162.pem
I1212 00:18:13.645115 1756238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14390162.pem
I1212 00:18:13.652860 1756238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14390162.pem /etc/ssl/certs/3ec20f2e.0"
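
The ln -fs sequence above reproduces what OpenSSL's c_rehash does: during verification OpenSSL looks CA certificates up by subject-name hash, so each PEM under /etc/ssl/certs needs a <hash>.0 symlink, where the hash is exactly what openssl x509 -hash -noout prints. A sketch that shells out the same way the logged commands do; the function name is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the <certsDir>/<hash>.0 symlink that lets
// OpenSSL find certPath during verification, as in the log above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // mirror ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
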
I1212 00:18:13.662589 1756238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1212 00:18:13.666517 1756238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1212 00:18:13.673861 1756238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1212 00:18:13.681053 1756238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1212 00:18:13.688396 1756238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1212 00:18:13.695776 1756238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1212 00:18:13.702915 1756238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
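
openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides the existing control-plane certs can be reused instead of regenerated. The same check in pure Go, as a sketch with an illustrative function name:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path stays valid for at
// least d (the -checkend 86400 calls above use d = 24h).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
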
I1212 00:18:13.711101 1756238 kubeadm.go:392] StartCluster: {Name:no-preload-828307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-828307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1212 00:18:13.712966 1756238 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1212 00:18:13.740288 1756238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1212 00:18:13.751041 1756238 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I1212 00:18:13.751072 1756238 kubeadm.go:593] restartPrimaryControlPlane start ...
I1212 00:18:13.751166 1756238 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1212 00:18:13.761297 1756238 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1212 00:18:13.761904 1756238 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-828307" does not appear in /home/jenkins/minikube-integration/20083-1433638/kubeconfig
I1212 00:18:13.762179 1756238 kubeconfig.go:62] /home/jenkins/minikube-integration/20083-1433638/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-828307" cluster setting kubeconfig missing "no-preload-828307" context setting]
I1212 00:18:13.762669 1756238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-1433638/kubeconfig: {Name:mke08f285bdf4a548eaaf91468b606aae00e57d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:18:13.764328 1756238 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1212 00:18:13.774933 1756238 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I1212 00:18:13.774970 1756238 kubeadm.go:597] duration metric: took 23.890661ms to restartPrimaryControlPlane
I1212 00:18:13.774981 1756238 kubeadm.go:394] duration metric: took 63.903368ms to StartCluster
I1212 00:18:13.775006 1756238 settings.go:142] acquiring lock: {Name:mkf3b256347f08c765a9bedb8db6d14ad0fbedd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:18:13.775072 1756238 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20083-1433638/kubeconfig
I1212 00:18:13.776194 1756238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-1433638/kubeconfig: {Name:mke08f285bdf4a548eaaf91468b606aae00e57d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:18:13.776415 1756238 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I1212 00:18:13.776872 1756238 config.go:182] Loaded profile config "no-preload-828307": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1212 00:18:13.776930 1756238 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1212 00:18:13.777055 1756238 addons.go:69] Setting storage-provisioner=true in profile "no-preload-828307"
I1212 00:18:13.777076 1756238 addons.go:234] Setting addon storage-provisioner=true in "no-preload-828307"
W1212 00:18:13.777087 1756238 addons.go:243] addon storage-provisioner should already be in state true
I1212 00:18:13.777110 1756238 host.go:66] Checking if "no-preload-828307" exists ...
I1212 00:18:13.777133 1756238 addons.go:69] Setting default-storageclass=true in profile "no-preload-828307"
I1212 00:18:13.777167 1756238 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-828307"
I1212 00:18:13.777617 1756238 cli_runner.go:164] Run: docker container inspect no-preload-828307 --format={{.State.Status}}
I1212 00:18:13.777875 1756238 cli_runner.go:164] Run: docker container inspect no-preload-828307 --format={{.State.Status}}
I1212 00:18:13.778194 1756238 addons.go:69] Setting dashboard=true in profile "no-preload-828307"
I1212 00:18:13.778213 1756238 addons.go:234] Setting addon dashboard=true in "no-preload-828307"
W1212 00:18:13.778220 1756238 addons.go:243] addon dashboard should already be in state true
I1212 00:18:13.778250 1756238 host.go:66] Checking if "no-preload-828307" exists ...
I1212 00:18:13.778702 1756238 cli_runner.go:164] Run: docker container inspect no-preload-828307 --format={{.State.Status}}
I1212 00:18:13.781319 1756238 addons.go:69] Setting metrics-server=true in profile "no-preload-828307"
I1212 00:18:13.781349 1756238 addons.go:234] Setting addon metrics-server=true in "no-preload-828307"
W1212 00:18:13.781357 1756238 addons.go:243] addon metrics-server should already be in state true
I1212 00:18:13.781397 1756238 host.go:66] Checking if "no-preload-828307" exists ...
I1212 00:18:13.781937 1756238 cli_runner.go:164] Run: docker container inspect no-preload-828307 --format={{.State.Status}}
I1212 00:18:13.783163 1756238 out.go:177] * Verifying Kubernetes components...
I1212 00:18:13.786938 1756238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:18:13.840822 1756238 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1212 00:18:13.840894 1756238 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1212 00:18:13.841673 1756238 addons.go:234] Setting addon default-storageclass=true in "no-preload-828307"
W1212 00:18:13.841699 1756238 addons.go:243] addon default-storageclass should already be in state true
I1212 00:18:13.841724 1756238 host.go:66] Checking if "no-preload-828307" exists ...
I1212 00:18:13.842157 1756238 cli_runner.go:164] Run: docker container inspect no-preload-828307 --format={{.State.Status}}
I1212 00:18:13.844905 1756238 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1212 00:18:13.844930 1756238 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1212 00:18:13.844994 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:13.850747 1756238 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1212 00:18:13.853755 1756238 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1212 00:18:13.853782 1756238 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1212 00:18:13.853852 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:13.887917 1756238 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1212 00:18:13.891681 1756238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/no-preload-828307/id_rsa Username:docker}
I1212 00:18:13.897571 1756238 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1212 00:18:13.897598 1756238 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1212 00:18:13.897681 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:13.919724 1756238 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1212 00:18:13.919745 1756238 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1212 00:18:13.919812 1756238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-828307
I1212 00:18:13.944670 1756238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/no-preload-828307/id_rsa Username:docker}
I1212 00:18:13.977158 1756238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/no-preload-828307/id_rsa Username:docker}
I1212 00:18:13.988950 1756238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34610 SSHKeyPath:/home/jenkins/minikube-integration/20083-1433638/.minikube/machines/no-preload-828307/id_rsa Username:docker}
I1212 00:18:14.016312 1756238 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1212 00:18:14.098175 1756238 node_ready.go:35] waiting up to 6m0s for node "no-preload-828307" to be "Ready" ...
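
node_ready.go polls the API server until the node reports condition Ready=True, bounded by the 6m timeout chosen at start.go:235. A stripped-down version with client-go; clientset construction is assumed, the function name and poll interval are illustrative:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the named node has condition Ready=True or the
// context expires, roughly what the node_ready.go:35 wait above is doing.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second): // poll interval; minikube's differs
		}
	}
}
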
I1212 00:18:14.114439 1756238 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1212 00:18:14.114517 1756238 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1212 00:18:14.152826 1756238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1212 00:18:14.193958 1756238 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1212 00:18:14.193983 1756238 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1212 00:18:14.225121 1756238 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1212 00:18:14.225146 1756238 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1212 00:18:14.291842 1756238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1212 00:18:14.293494 1756238 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1212 00:18:14.293571 1756238 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1212 00:18:14.480591 1756238 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1212 00:18:14.480674 1756238 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1212 00:18:14.546410 1756238 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1212 00:18:14.546484 1756238 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1212 00:18:13.291655 1748736 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:13.291678 1748736 pod_ready.go:82] duration metric: took 1m13.507479824s for pod "kube-controller-manager-old-k8s-version-385687" in "kube-system" namespace to be "Ready" ...
I1212 00:18:13.291690 1748736 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dg295" in "kube-system" namespace to be "Ready" ...
I1212 00:18:13.300037 1748736 pod_ready.go:93] pod "kube-proxy-dg295" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:13.300058 1748736 pod_ready.go:82] duration metric: took 8.360272ms for pod "kube-proxy-dg295" in "kube-system" namespace to be "Ready" ...
I1212 00:18:13.300070 1748736 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-385687" in "kube-system" namespace to be "Ready" ...
I1212 00:18:15.337504 1748736 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:17.307937 1748736 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-385687" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:17.308009 1748736 pod_ready.go:82] duration metric: took 4.007922555s for pod "kube-scheduler-old-k8s-version-385687" in "kube-system" namespace to be "Ready" ...
I1212 00:18:17.308036 1748736 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace to be "Ready" ...
I1212 00:18:14.566628 1756238 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1212 00:18:14.566703 1756238 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1212 00:18:14.651162 1756238 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I1212 00:18:14.651236 1756238 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1212 00:18:14.703901 1756238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1212 00:18:14.742573 1756238 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I1212 00:18:14.742684 1756238 retry.go:31] will retry after 255.220092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I1212 00:18:14.833155 1756238 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1212 00:18:14.833234 1756238 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
W1212 00:18:14.870681 1756238 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I1212 00:18:14.870773 1756238 retry.go:31] will retry after 207.430683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I1212 00:18:14.925830 1756238 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1212 00:18:14.925903 1756238 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1212 00:18:14.998201 1756238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1212 00:18:15.066863 1756238 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I1212 00:18:15.066956 1756238 retry.go:31] will retry after 239.122588ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I1212 00:18:15.079025 1756238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1212 00:18:15.244967 1756238 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1212 00:18:15.244994 1756238 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1212 00:18:15.306497 1756238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1212 00:18:15.500646 1756238 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1212 00:18:15.500675 1756238 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1212 00:18:15.697817 1756238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1212 00:18:19.502615 1756238 node_ready.go:49] node "no-preload-828307" has status "Ready":"True"
I1212 00:18:19.502646 1756238 node_ready.go:38] duration metric: took 5.404379376s for node "no-preload-828307" to be "Ready" ...
I1212 00:18:19.502658 1756238 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1212 00:18:19.529379 1756238 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8pztl" in "kube-system" namespace to be "Ready" ...
I1212 00:18:19.557332 1756238 pod_ready.go:93] pod "coredns-7c65d6cfc9-8pztl" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:19.557361 1756238 pod_ready.go:82] duration metric: took 27.94852ms for pod "coredns-7c65d6cfc9-8pztl" in "kube-system" namespace to be "Ready" ...
I1212 00:18:19.557374 1756238 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-828307" in "kube-system" namespace to be "Ready" ...
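
Beyond the per-pod waits, the sweep announced at 00:18:19.502 checks a fixed set of system-critical labels (k8s-app=kube-dns, component=etcd, and so on). Checking one label's pods with client-go looks roughly like this; clientset construction is assumed and the function name is illustrative:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allReady reports whether every kube-system pod matching selector (for
// example "k8s-app=kube-dns") has condition Ready=True.
func allReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}
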
I1212 00:18:19.315696 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:21.316872 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:19.568229 1756238 pod_ready.go:93] pod "etcd-no-preload-828307" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:19.568250 1756238 pod_ready.go:82] duration metric: took 10.868607ms for pod "etcd-no-preload-828307" in "kube-system" namespace to be "Ready" ...
I1212 00:18:19.568262 1756238 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-828307" in "kube-system" namespace to be "Ready" ...
I1212 00:18:19.588988 1756238 pod_ready.go:93] pod "kube-apiserver-no-preload-828307" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:19.589058 1756238 pod_ready.go:82] duration metric: took 20.788356ms for pod "kube-apiserver-no-preload-828307" in "kube-system" namespace to be "Ready" ...
I1212 00:18:19.589085 1756238 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-828307" in "kube-system" namespace to be "Ready" ...
I1212 00:18:19.600192 1756238 pod_ready.go:93] pod "kube-controller-manager-no-preload-828307" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:19.600266 1756238 pod_ready.go:82] duration metric: took 11.157273ms for pod "kube-controller-manager-no-preload-828307" in "kube-system" namespace to be "Ready" ...
I1212 00:18:19.600293 1756238 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-662r7" in "kube-system" namespace to be "Ready" ...
I1212 00:18:19.716365 1756238 pod_ready.go:93] pod "kube-proxy-662r7" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:19.716440 1756238 pod_ready.go:82] duration metric: took 116.123659ms for pod "kube-proxy-662r7" in "kube-system" namespace to be "Ready" ...
I1212 00:18:19.716466 1756238 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-828307" in "kube-system" namespace to be "Ready" ...
I1212 00:18:20.108002 1756238 pod_ready.go:93] pod "kube-scheduler-no-preload-828307" in "kube-system" namespace has status "Ready":"True"
I1212 00:18:20.108079 1756238 pod_ready.go:82] duration metric: took 391.587087ms for pod "kube-scheduler-no-preload-828307" in "kube-system" namespace to be "Ready" ...
I1212 00:18:20.108107 1756238 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace to be "Ready" ...
I1212 00:18:22.121398 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:22.924048 1756238 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.925753471s)
I1212 00:18:22.924119 1756238 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.845055262s)
I1212 00:18:23.077137 1756238 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.770594206s)
I1212 00:18:23.077181 1756238 addons.go:475] Verifying addon metrics-server=true in "no-preload-828307"
I1212 00:18:23.137659 1756238 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.439797628s)
I1212 00:18:23.140806 1756238 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-828307 addons enable metrics-server
I1212 00:18:23.143767 1756238 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I1212 00:18:23.146760 1756238 addons.go:510] duration metric: took 9.369824153s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I1212 00:18:23.317101 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:25.815496 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:24.614439 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:26.616197 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:29.114842 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:28.330359 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:30.888488 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:31.614345 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:33.614679 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:33.314308 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:35.315121 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:36.128963 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:38.613848 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:37.814691 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:40.314898 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:40.614562 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:43.115008 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:42.814239 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:44.814756 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:47.313566 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:45.116260 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:47.613920 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:49.314288 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:51.813227 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:50.115625 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:52.613824 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:53.813950 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:55.815286 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:54.618307 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:57.115005 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:58.314520 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:00.315890 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:18:59.614442 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:02.114750 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:02.813648 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:05.320058 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:04.614373 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:07.114696 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:07.814531 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:10.316881 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:09.615199 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:12.114887 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:12.814679 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:14.814973 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:17.314816 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:14.615069 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:17.113916 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:19.114591 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:19.315313 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:21.814583 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:21.118859 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:23.614606 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:24.314628 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:26.315360 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:26.114284 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:28.114789 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:28.815167 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:31.360816 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:30.121164 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:32.614295 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:33.815121 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:36.315499 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:34.614807 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:37.114332 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:39.117691 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:38.815071 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:41.314791 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:41.614477 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:44.113852 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:43.314867 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:45.813647 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:46.114293 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:48.115315 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:47.814202 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:50.317009 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:50.614505 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:52.615007 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:52.814237 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:55.313952 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:57.315872 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:55.118719 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:57.615061 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:19:59.321150 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:01.817633 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:00.121505 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:02.616199 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:04.314614 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:06.821447 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:05.116141 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:07.615771 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:09.314885 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:11.814720 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:10.114439 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:12.114934 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:13.814880 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:16.314266 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:14.614605 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:17.114738 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:19.114814 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:18.314400 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:20.814247 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:21.613872 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:23.614413 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:22.814475 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:25.315507 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:25.615575 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:28.115224 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:27.813762 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:30.315217 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:30.115845 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:32.614247 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:32.814740 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:34.815055 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:37.313855 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:34.615147 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:36.615228 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:39.114252 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:39.315097 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:41.320773 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:41.114901 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:43.114934 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:43.813880 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:45.821173 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:45.115138 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:47.616012 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:48.314945 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:50.815712 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:50.114220 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:52.114449 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:54.116064 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:53.314221 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:55.315459 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:57.315797 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:56.615401 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:59.115208 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:20:59.814554 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:01.815293 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:01.614944 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:04.114724 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:04.314591 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:06.814131 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:06.117812 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:08.613951 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:08.814484 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:10.816760 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:10.614218 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:13.117787 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:13.314339 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:15.814531 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:15.616032 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:18.115832 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:17.814737 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:20.314274 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:22.316324 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:20.614885 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:23.115539 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:24.814451 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:26.814788 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:25.615042 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:28.126828 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:29.313629 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:31.315865 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:30.613994 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:32.614187 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:33.814444 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:36.314735 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:34.616225 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:37.114459 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:39.115084 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:38.814252 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:40.815500 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:41.615712 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:44.114282 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:42.817125 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:45.315266 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:46.614996 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:49.114356 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:47.814726 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:50.314778 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:52.315226 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:51.116941 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:53.614660 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:54.818206 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:57.314796 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:55.614733 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:57.614907 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:21:59.315259 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:01.315401 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:00.119396 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:02.616143 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:03.813742 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:05.814061 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:05.114164 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:07.615132 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:07.814993 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:10.314245 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:12.315904 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:10.115918 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:12.615250 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:14.815568 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:17.319230 1748736 pod_ready.go:103] pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:17.319259 1748736 pod_ready.go:82] duration metric: took 4m0.011200239s for pod "metrics-server-9975d5f86-4xxgq" in "kube-system" namespace to be "Ready" ...
E1212 00:22:17.319271 1748736 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1212 00:22:17.319279 1748736 pod_ready.go:39] duration metric: took 5m26.514570873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1212 00:22:17.319300 1748736 api_server.go:52] waiting for apiserver process to appear ...
I1212 00:22:17.319382 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1212 00:22:17.341230 1748736 logs.go:282] 2 containers: [b01ba427f07f d8c189ec5293]
I1212 00:22:17.341389 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1212 00:22:17.361460 1748736 logs.go:282] 2 containers: [c87014549ad3 3339d1ea608b]
I1212 00:22:17.361585 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1212 00:22:17.387326 1748736 logs.go:282] 2 containers: [26c238c9510b c1eb50b61731]
I1212 00:22:17.387524 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1212 00:22:17.409542 1748736 logs.go:282] 2 containers: [4fb87ba7f570 e1f79e78d53d]
I1212 00:22:17.409637 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1212 00:22:17.429723 1748736 logs.go:282] 2 containers: [0bb2da207c94 d2ad7ae21ec1]
I1212 00:22:17.429824 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1212 00:22:17.449540 1748736 logs.go:282] 2 containers: [d5db19c26736 b9c1c621b89a]
I1212 00:22:17.449626 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1212 00:22:17.467290 1748736 logs.go:282] 0 containers: []
W1212 00:22:17.467314 1748736 logs.go:284] No container was found matching "kindnet"
I1212 00:22:17.467372 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1212 00:22:17.489543 1748736 logs.go:282] 1 containers: [eb5d61e0470f]
I1212 00:22:17.489625 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1212 00:22:17.509704 1748736 logs.go:282] 2 containers: [5567aacfd876 f96d450a47e8]
I1212 00:22:17.509738 1748736 logs.go:123] Gathering logs for kube-scheduler [e1f79e78d53d] ...
I1212 00:22:17.509750 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1f79e78d53d"
I1212 00:22:17.545642 1748736 logs.go:123] Gathering logs for kube-controller-manager [b9c1c621b89a] ...
I1212 00:22:17.545676 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c1c621b89a"
I1212 00:22:17.588900 1748736 logs.go:123] Gathering logs for storage-provisioner [5567aacfd876] ...
I1212 00:22:17.588940 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5567aacfd876"
I1212 00:22:15.118067 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:17.614391 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:17.625450 1748736 logs.go:123] Gathering logs for dmesg ...
I1212 00:22:17.625527 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1212 00:22:17.659538 1748736 logs.go:123] Gathering logs for describe nodes ...
I1212 00:22:17.659616 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1212 00:22:17.848576 1748736 logs.go:123] Gathering logs for etcd [c87014549ad3] ...
I1212 00:22:17.848608 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87014549ad3"
I1212 00:22:17.879161 1748736 logs.go:123] Gathering logs for container status ...
I1212 00:22:17.879192 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1212 00:22:17.940589 1748736 logs.go:123] Gathering logs for kube-scheduler [4fb87ba7f570] ...
I1212 00:22:17.940618 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb87ba7f570"
I1212 00:22:17.970379 1748736 logs.go:123] Gathering logs for kube-controller-manager [d5db19c26736] ...
I1212 00:22:17.970411 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5db19c26736"
I1212 00:22:18.016189 1748736 logs.go:123] Gathering logs for storage-provisioner [f96d450a47e8] ...
I1212 00:22:18.016230 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96d450a47e8"
I1212 00:22:18.041594 1748736 logs.go:123] Gathering logs for coredns [c1eb50b61731] ...
I1212 00:22:18.041626 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1eb50b61731"
I1212 00:22:18.071309 1748736 logs.go:123] Gathering logs for kube-proxy [0bb2da207c94] ...
I1212 00:22:18.071340 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb2da207c94"
I1212 00:22:18.098938 1748736 logs.go:123] Gathering logs for kubernetes-dashboard [eb5d61e0470f] ...
I1212 00:22:18.098969 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb5d61e0470f"
I1212 00:22:18.131454 1748736 logs.go:123] Gathering logs for Docker ...
I1212 00:22:18.131482 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1212 00:22:18.166794 1748736 logs.go:123] Gathering logs for kube-apiserver [d8c189ec5293] ...
I1212 00:22:18.166836 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8c189ec5293"
I1212 00:22:18.243162 1748736 logs.go:123] Gathering logs for etcd [3339d1ea608b] ...
I1212 00:22:18.243196 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3339d1ea608b"
I1212 00:22:18.280133 1748736 logs.go:123] Gathering logs for coredns [26c238c9510b] ...
I1212 00:22:18.280164 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26c238c9510b"
I1212 00:22:18.302243 1748736 logs.go:123] Gathering logs for kubelet ...
I1212 00:22:18.302276 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1212 00:22:18.359731 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:50 old-k8s-version-385687 kubelet[1416]: E1212 00:16:50.375774 1416 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:18.360000 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:50 old-k8s-version-385687 kubelet[1416]: E1212 00:16:50.397585 1416 reflector.go:138] object-"kube-system"/"coredns-token-rrgdv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-rrgdv" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:18.366594 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:54 old-k8s-version-385687 kubelet[1416]: E1212 00:16:54.000437 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:18.367637 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:55 old-k8s-version-385687 kubelet[1416]: E1212 00:16:55.428334 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.368523 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:56 old-k8s-version-385687 kubelet[1416]: E1212 00:16:56.498264 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.370228 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:10 old-k8s-version-385687 kubelet[1416]: E1212 00:17:10.831853 1416 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-n4jdg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-n4jdg" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:18.372440 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:11 old-k8s-version-385687 kubelet[1416]: E1212 00:17:11.272479 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:18.377156 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:19 old-k8s-version-385687 kubelet[1416]: E1212 00:17:19.217024 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:18.377545 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:19 old-k8s-version-385687 kubelet[1416]: E1212 00:17:19.713310 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.377855 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:24 old-k8s-version-385687 kubelet[1416]: E1212 00:17:24.771806 1416 pod_workers.go:191] Error syncing pod c35bb048-c903-47f6-8458-82dac2ca6358 ("storage-provisioner_kube-system(c35bb048-c903-47f6-8458-82dac2ca6358)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c35bb048-c903-47f6-8458-82dac2ca6358)"
W1212 00:22:18.378170 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:25 old-k8s-version-385687 kubelet[1416]: E1212 00:17:25.254131 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.380789 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:35 old-k8s-version-385687 kubelet[1416]: E1212 00:17:35.918222 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:18.382990 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:40 old-k8s-version-385687 kubelet[1416]: E1212 00:17:40.315954 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:18.383191 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:49 old-k8s-version-385687 kubelet[1416]: E1212 00:17:49.254240 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.383376 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:55 old-k8s-version-385687 kubelet[1416]: E1212 00:17:55.260828 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.385773 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:03 old-k8s-version-385687 kubelet[1416]: E1212 00:18:03.816532 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:18.385967 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:07 old-k8s-version-385687 kubelet[1416]: E1212 00:18:07.253787 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.386167 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:15 old-k8s-version-385687 kubelet[1416]: E1212 00:18:15.290346 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.388259 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:21 old-k8s-version-385687 kubelet[1416]: E1212 00:18:21.284205 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:18.388460 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:28 old-k8s-version-385687 kubelet[1416]: E1212 00:18:28.254710 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.388649 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:34 old-k8s-version-385687 kubelet[1416]: E1212 00:18:34.260169 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.388851 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:40 old-k8s-version-385687 kubelet[1416]: E1212 00:18:40.256316 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.389044 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:47 old-k8s-version-385687 kubelet[1416]: E1212 00:18:47.254188 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.391325 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:51 old-k8s-version-385687 kubelet[1416]: E1212 00:18:51.919190 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:18.391559 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:01 old-k8s-version-385687 kubelet[1416]: E1212 00:19:01.253957 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.391760 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:05 old-k8s-version-385687 kubelet[1416]: E1212 00:19:05.254139 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.391944 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:15 old-k8s-version-385687 kubelet[1416]: E1212 00:19:15.254932 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.392160 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:18 old-k8s-version-385687 kubelet[1416]: E1212 00:19:18.268917 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.392344 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:26 old-k8s-version-385687 kubelet[1416]: E1212 00:19:26.254839 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.392541 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:32 old-k8s-version-385687 kubelet[1416]: E1212 00:19:32.255755 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.392726 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:40 old-k8s-version-385687 kubelet[1416]: E1212 00:19:40.257318 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.392923 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:44 old-k8s-version-385687 kubelet[1416]: E1212 00:19:44.254431 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.395023 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:52 old-k8s-version-385687 kubelet[1416]: E1212 00:19:52.286229 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:18.395225 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:56 old-k8s-version-385687 kubelet[1416]: E1212 00:19:56.256770 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.395429 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:05 old-k8s-version-385687 kubelet[1416]: E1212 00:20:05.253993 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.395626 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:11 old-k8s-version-385687 kubelet[1416]: E1212 00:20:11.253831 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.395814 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:16 old-k8s-version-385687 kubelet[1416]: E1212 00:20:16.258580 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.398050 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:24 old-k8s-version-385687 kubelet[1416]: E1212 00:20:24.840888 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:18.398238 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:28 old-k8s-version-385687 kubelet[1416]: E1212 00:20:28.253767 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.398435 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:39 old-k8s-version-385687 kubelet[1416]: E1212 00:20:39.254296 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.398620 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:39 old-k8s-version-385687 kubelet[1416]: E1212 00:20:39.255523 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.398820 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:50 old-k8s-version-385687 kubelet[1416]: E1212 00:20:50.260236 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399005 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:54 old-k8s-version-385687 kubelet[1416]: E1212 00:20:54.253607 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399206 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:04 old-k8s-version-385687 kubelet[1416]: E1212 00:21:04.253856 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399391 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:09 old-k8s-version-385687 kubelet[1416]: E1212 00:21:09.253849 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399596 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:17 old-k8s-version-385687 kubelet[1416]: E1212 00:21:17.254163 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399784 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:20 old-k8s-version-385687 kubelet[1416]: E1212 00:21:20.254162 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.399982 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:28 old-k8s-version-385687 kubelet[1416]: E1212 00:21:28.262765 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.400168 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:34 old-k8s-version-385687 kubelet[1416]: E1212 00:21:34.253963 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.400366 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:41 old-k8s-version-385687 kubelet[1416]: E1212 00:21:41.253839 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.400552 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:46 old-k8s-version-385687 kubelet[1416]: E1212 00:21:46.258966 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.400750 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:52 old-k8s-version-385687 kubelet[1416]: E1212 00:21:52.254181 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.400935 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:59 old-k8s-version-385687 kubelet[1416]: E1212 00:21:59.253931 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.401132 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:05 old-k8s-version-385687 kubelet[1416]: E1212 00:22:05.253864 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.401316 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:13 old-k8s-version-385687 kubelet[1416]: E1212 00:22:13.253794 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1212 00:22:18.401326 1748736 logs.go:123] Gathering logs for kube-apiserver [b01ba427f07f] ...
I1212 00:22:18.401340 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b01ba427f07f"
I1212 00:22:18.463548 1748736 logs.go:123] Gathering logs for kube-proxy [d2ad7ae21ec1] ...
I1212 00:22:18.463583 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2ad7ae21ec1"
I1212 00:22:18.489453 1748736 out.go:358] Setting ErrFile to fd 2...
I1212 00:22:18.489480 1748736 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1212 00:22:18.489562 1748736 out.go:270] X Problems detected in kubelet:
W1212 00:22:18.489576 1748736 out.go:270] Dec 12 00:21:46 old-k8s-version-385687 kubelet[1416]: E1212 00:21:46.258966 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.489597 1748736 out.go:270] Dec 12 00:21:52 old-k8s-version-385687 kubelet[1416]: E1212 00:21:52.254181 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.489622 1748736 out.go:270] Dec 12 00:21:59 old-k8s-version-385687 kubelet[1416]: E1212 00:21:59.253931 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.489641 1748736 out.go:270] Dec 12 00:22:05 old-k8s-version-385687 kubelet[1416]: E1212 00:22:05.253864 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:18.489663 1748736 out.go:270] Dec 12 00:22:13 old-k8s-version-385687 kubelet[1416]: E1212 00:22:13.253794 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1212 00:22:18.489669 1748736 out.go:358] Setting ErrFile to fd 2...
I1212 00:22:18.489682 1748736 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:19.615721 1756238 pod_ready.go:103] pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace has status "Ready":"False"
I1212 00:22:20.114297 1756238 pod_ready.go:82] duration metric: took 4m0.006141869s for pod "metrics-server-6867b74b74-kwdt9" in "kube-system" namespace to be "Ready" ...
E1212 00:22:20.114328 1756238 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1212 00:22:20.114341 1756238 pod_ready.go:39] duration metric: took 4m0.611671957s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1212 00:22:20.114360 1756238 api_server.go:52] waiting for apiserver process to appear ...
I1212 00:22:20.114455 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1212 00:22:20.138203 1756238 logs.go:282] 2 containers: [31f276b47831 c315d12e06c1]
I1212 00:22:20.138306 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1212 00:22:20.159398 1756238 logs.go:282] 2 containers: [4f3a72d42d93 08d611f5d6d3]
I1212 00:22:20.159550 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1212 00:22:20.182106 1756238 logs.go:282] 2 containers: [a73db0c1bbd1 d03d721b1fcc]
I1212 00:22:20.182252 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1212 00:22:20.205062 1756238 logs.go:282] 2 containers: [c17541d45e35 c900b7f6d85a]
I1212 00:22:20.205196 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1212 00:22:20.226560 1756238 logs.go:282] 2 containers: [85e128638a89 e122dd9c4811]
I1212 00:22:20.226661 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1212 00:22:20.249201 1756238 logs.go:282] 2 containers: [c3be126fd28b a443e10d004a]
I1212 00:22:20.249301 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1212 00:22:20.274600 1756238 logs.go:282] 0 containers: []
W1212 00:22:20.274624 1756238 logs.go:284] No container was found matching "kindnet"
I1212 00:22:20.274684 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1212 00:22:20.305263 1756238 logs.go:282] 2 containers: [90659d963921 ee9b6cea45cd]
I1212 00:22:20.305432 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1212 00:22:20.327607 1756238 logs.go:282] 1 containers: [487cb5329d0e]
I1212 00:22:20.327691 1756238 logs.go:123] Gathering logs for kubelet ...
I1212 00:22:20.327718 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1212 00:22:20.420209 1756238 logs.go:123] Gathering logs for kube-apiserver [c315d12e06c1] ...
I1212 00:22:20.420248 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c315d12e06c1"
I1212 00:22:20.479843 1756238 logs.go:123] Gathering logs for kube-scheduler [c900b7f6d85a] ...
I1212 00:22:20.479881 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c900b7f6d85a"
I1212 00:22:20.509557 1756238 logs.go:123] Gathering logs for container status ...
I1212 00:22:20.509589 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1212 00:22:20.574329 1756238 logs.go:123] Gathering logs for describe nodes ...
I1212 00:22:20.574363 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1212 00:22:20.761888 1756238 logs.go:123] Gathering logs for kube-scheduler [c17541d45e35] ...
I1212 00:22:20.761924 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c17541d45e35"
I1212 00:22:20.796381 1756238 logs.go:123] Gathering logs for kube-proxy [85e128638a89] ...
I1212 00:22:20.796410 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85e128638a89"
I1212 00:22:20.821522 1756238 logs.go:123] Gathering logs for kubernetes-dashboard [487cb5329d0e] ...
I1212 00:22:20.821552 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 487cb5329d0e"
I1212 00:22:20.848668 1756238 logs.go:123] Gathering logs for dmesg ...
I1212 00:22:20.848697 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1212 00:22:20.866931 1756238 logs.go:123] Gathering logs for kube-controller-manager [c3be126fd28b] ...
I1212 00:22:20.867019 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3be126fd28b"
I1212 00:22:20.937837 1756238 logs.go:123] Gathering logs for storage-provisioner [ee9b6cea45cd] ...
I1212 00:22:20.937873 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee9b6cea45cd"
I1212 00:22:20.978194 1756238 logs.go:123] Gathering logs for Docker ...
I1212 00:22:20.978224 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1212 00:22:21.009352 1756238 logs.go:123] Gathering logs for coredns [d03d721b1fcc] ...
I1212 00:22:21.009391 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03d721b1fcc"
I1212 00:22:21.038034 1756238 logs.go:123] Gathering logs for kube-proxy [e122dd9c4811] ...
I1212 00:22:21.038073 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e122dd9c4811"
I1212 00:22:21.063460 1756238 logs.go:123] Gathering logs for kube-controller-manager [a443e10d004a] ...
I1212 00:22:21.063492 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443e10d004a"
I1212 00:22:21.105039 1756238 logs.go:123] Gathering logs for storage-provisioner [90659d963921] ...
I1212 00:22:21.105087 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90659d963921"
I1212 00:22:21.134920 1756238 logs.go:123] Gathering logs for kube-apiserver [31f276b47831] ...
I1212 00:22:21.134950 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31f276b47831"
I1212 00:22:21.174663 1756238 logs.go:123] Gathering logs for etcd [4f3a72d42d93] ...
I1212 00:22:21.174699 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3a72d42d93"
I1212 00:22:21.222750 1756238 logs.go:123] Gathering logs for etcd [08d611f5d6d3] ...
I1212 00:22:21.222781 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08d611f5d6d3"
I1212 00:22:21.253581 1756238 logs.go:123] Gathering logs for coredns [a73db0c1bbd1] ...
I1212 00:22:21.253613 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a73db0c1bbd1"
I1212 00:22:23.791624 1756238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:22:23.804202 1756238 api_server.go:72] duration metric: took 4m10.02774609s to wait for apiserver process to appear ...
I1212 00:22:23.804227 1756238 api_server.go:88] waiting for apiserver healthz status ...
I1212 00:22:23.804327 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1212 00:22:23.827636 1756238 logs.go:282] 2 containers: [31f276b47831 c315d12e06c1]
I1212 00:22:23.827720 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1212 00:22:23.847451 1756238 logs.go:282] 2 containers: [4f3a72d42d93 08d611f5d6d3]
I1212 00:22:23.847581 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1212 00:22:23.867294 1756238 logs.go:282] 2 containers: [a73db0c1bbd1 d03d721b1fcc]
I1212 00:22:23.867378 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1212 00:22:23.887657 1756238 logs.go:282] 2 containers: [c17541d45e35 c900b7f6d85a]
I1212 00:22:23.887745 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1212 00:22:23.909033 1756238 logs.go:282] 2 containers: [85e128638a89 e122dd9c4811]
I1212 00:22:23.909120 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1212 00:22:23.935863 1756238 logs.go:282] 2 containers: [c3be126fd28b a443e10d004a]
I1212 00:22:23.936017 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1212 00:22:23.960749 1756238 logs.go:282] 0 containers: []
W1212 00:22:23.960772 1756238 logs.go:284] No container was found matching "kindnet"
I1212 00:22:23.960839 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1212 00:22:23.979982 1756238 logs.go:282] 2 containers: [90659d963921 ee9b6cea45cd]
I1212 00:22:23.980138 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1212 00:22:24.002567 1756238 logs.go:282] 1 containers: [487cb5329d0e]
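Each polling pass opens with the enumeration above: one "docker ps -a" per control-plane component, filtered on the k8s_ name prefix that dockershim/cri-dockerd assigns to pod containers. Most components report two IDs here, typically the live container plus an exited predecessor left over from before the restart. A minimal sketch of a single enumeration step, assuming a local docker CLI and using etcd as the illustrative component:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // List every container, running or exited, whose name carries the
    // k8s_etcd prefix, and print the result in the same
    // "N containers: [...]" shape the logs.go lines above use.
    func main() {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_etcd", "--format", "{{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }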
I1212 00:22:24.002658 1756238 logs.go:123] Gathering logs for coredns [a73db0c1bbd1] ...
I1212 00:22:24.002686 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a73db0c1bbd1"
I1212 00:22:24.031517 1756238 logs.go:123] Gathering logs for kube-controller-manager [c3be126fd28b] ...
I1212 00:22:24.031553 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3be126fd28b"
I1212 00:22:24.081537 1756238 logs.go:123] Gathering logs for kube-controller-manager [a443e10d004a] ...
I1212 00:22:24.081574 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443e10d004a"
I1212 00:22:24.124942 1756238 logs.go:123] Gathering logs for container status ...
I1212 00:22:24.124977 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1212 00:22:24.175154 1756238 logs.go:123] Gathering logs for describe nodes ...
I1212 00:22:24.175187 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1212 00:22:24.319706 1756238 logs.go:123] Gathering logs for coredns [d03d721b1fcc] ...
I1212 00:22:24.319737 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03d721b1fcc"
I1212 00:22:24.345427 1756238 logs.go:123] Gathering logs for kube-scheduler [c17541d45e35] ...
I1212 00:22:24.345463 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c17541d45e35"
I1212 00:22:24.370778 1756238 logs.go:123] Gathering logs for kubernetes-dashboard [487cb5329d0e] ...
I1212 00:22:24.370809 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 487cb5329d0e"
I1212 00:22:24.396472 1756238 logs.go:123] Gathering logs for Docker ...
I1212 00:22:24.396504 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1212 00:22:24.433095 1756238 logs.go:123] Gathering logs for kubelet ...
I1212 00:22:24.433135 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1212 00:22:24.517239 1756238 logs.go:123] Gathering logs for kube-apiserver [c315d12e06c1] ...
I1212 00:22:24.517275 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c315d12e06c1"
I1212 00:22:24.631485 1756238 logs.go:123] Gathering logs for etcd [4f3a72d42d93] ...
I1212 00:22:24.631522 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3a72d42d93"
I1212 00:22:24.666943 1756238 logs.go:123] Gathering logs for etcd [08d611f5d6d3] ...
I1212 00:22:24.666978 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08d611f5d6d3"
I1212 00:22:24.703908 1756238 logs.go:123] Gathering logs for storage-provisioner [ee9b6cea45cd] ...
I1212 00:22:24.703946 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee9b6cea45cd"
I1212 00:22:24.727392 1756238 logs.go:123] Gathering logs for storage-provisioner [90659d963921] ...
I1212 00:22:24.727469 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90659d963921"
I1212 00:22:24.757615 1756238 logs.go:123] Gathering logs for dmesg ...
I1212 00:22:24.757645 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1212 00:22:24.776961 1756238 logs.go:123] Gathering logs for kube-apiserver [31f276b47831] ...
I1212 00:22:24.776987 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31f276b47831"
I1212 00:22:24.813013 1756238 logs.go:123] Gathering logs for kube-scheduler [c900b7f6d85a] ...
I1212 00:22:24.813048 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c900b7f6d85a"
I1212 00:22:24.855385 1756238 logs.go:123] Gathering logs for kube-proxy [85e128638a89] ...
I1212 00:22:24.855503 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85e128638a89"
I1212 00:22:24.877608 1756238 logs.go:123] Gathering logs for kube-proxy [e122dd9c4811] ...
I1212 00:22:24.877636 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e122dd9c4811"
I1212 00:22:27.401325 1756238 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1212 00:22:27.410618 1756238 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
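The probe recorded at api_server.go:253 above is plain HTTPS: GET /healthz on the apiserver and treat a 200 whose body is the literal "ok" as healthy. A minimal sketch of that check, assuming anonymous access is permitted (the default system:public-info-viewer role exposes /healthz) and skipping certificate verification purely for brevity:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    // GET the apiserver healthz endpoint and report status plus body.
    // The address matches this log; InsecureSkipVerify is a shortcut
    // for the sketch, not a recommendation.
    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.85.2:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz check failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
    }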
I1212 00:22:27.411753 1756238 api_server.go:141] control plane version: v1.31.2
I1212 00:22:27.411785 1756238 api_server.go:131] duration metric: took 3.607549736s to wait for apiserver health ...
I1212 00:22:27.411796 1756238 system_pods.go:43] waiting for kube-system pods to appear ...
I1212 00:22:27.411871 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1212 00:22:27.431357 1756238 logs.go:282] 2 containers: [31f276b47831 c315d12e06c1]
I1212 00:22:27.431485 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1212 00:22:27.451500 1756238 logs.go:282] 2 containers: [4f3a72d42d93 08d611f5d6d3]
I1212 00:22:27.451587 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1212 00:22:27.472016 1756238 logs.go:282] 2 containers: [a73db0c1bbd1 d03d721b1fcc]
I1212 00:22:27.472136 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1212 00:22:27.495197 1756238 logs.go:282] 2 containers: [c17541d45e35 c900b7f6d85a]
I1212 00:22:27.495306 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1212 00:22:27.515673 1756238 logs.go:282] 2 containers: [85e128638a89 e122dd9c4811]
I1212 00:22:27.515825 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1212 00:22:27.536257 1756238 logs.go:282] 2 containers: [c3be126fd28b a443e10d004a]
I1212 00:22:27.536378 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1212 00:22:27.556308 1756238 logs.go:282] 0 containers: []
W1212 00:22:27.556375 1756238 logs.go:284] No container was found matching "kindnet"
I1212 00:22:27.556449 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1212 00:22:27.576111 1756238 logs.go:282] 2 containers: [90659d963921 ee9b6cea45cd]
I1212 00:22:27.576198 1756238 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1212 00:22:27.596275 1756238 logs.go:282] 1 containers: [487cb5329d0e]
I1212 00:22:27.596358 1756238 logs.go:123] Gathering logs for kube-controller-manager [c3be126fd28b] ...
I1212 00:22:27.596376 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3be126fd28b"
I1212 00:22:27.647717 1756238 logs.go:123] Gathering logs for kube-controller-manager [a443e10d004a] ...
I1212 00:22:27.647756 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443e10d004a"
I1212 00:22:27.691671 1756238 logs.go:123] Gathering logs for kube-proxy [e122dd9c4811] ...
I1212 00:22:27.691709 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e122dd9c4811"
I1212 00:22:27.723351 1756238 logs.go:123] Gathering logs for coredns [d03d721b1fcc] ...
I1212 00:22:27.723382 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d03d721b1fcc"
I1212 00:22:27.752144 1756238 logs.go:123] Gathering logs for kube-scheduler [c17541d45e35] ...
I1212 00:22:27.752190 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c17541d45e35"
I1212 00:22:27.788201 1756238 logs.go:123] Gathering logs for kubernetes-dashboard [487cb5329d0e] ...
I1212 00:22:27.788234 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 487cb5329d0e"
I1212 00:22:27.814482 1756238 logs.go:123] Gathering logs for Docker ...
I1212 00:22:27.814555 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1212 00:22:27.849275 1756238 logs.go:123] Gathering logs for kube-apiserver [31f276b47831] ...
I1212 00:22:27.849313 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31f276b47831"
I1212 00:22:27.890514 1756238 logs.go:123] Gathering logs for dmesg ...
I1212 00:22:27.890594 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1212 00:22:27.914907 1756238 logs.go:123] Gathering logs for etcd [08d611f5d6d3] ...
I1212 00:22:27.914933 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08d611f5d6d3"
I1212 00:22:27.953457 1756238 logs.go:123] Gathering logs for coredns [a73db0c1bbd1] ...
I1212 00:22:27.953496 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a73db0c1bbd1"
I1212 00:22:27.980038 1756238 logs.go:123] Gathering logs for storage-provisioner [ee9b6cea45cd] ...
I1212 00:22:27.980068 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee9b6cea45cd"
I1212 00:22:28.006726 1756238 logs.go:123] Gathering logs for kubelet ...
I1212 00:22:28.006778 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1212 00:22:28.091197 1756238 logs.go:123] Gathering logs for kube-apiserver [c315d12e06c1] ...
I1212 00:22:28.091234 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c315d12e06c1"
I1212 00:22:28.160809 1756238 logs.go:123] Gathering logs for etcd [4f3a72d42d93] ...
I1212 00:22:28.160847 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3a72d42d93"
I1212 00:22:28.206121 1756238 logs.go:123] Gathering logs for kube-scheduler [c900b7f6d85a] ...
I1212 00:22:28.206158 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c900b7f6d85a"
I1212 00:22:28.242575 1756238 logs.go:123] Gathering logs for kube-proxy [85e128638a89] ...
I1212 00:22:28.242608 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85e128638a89"
I1212 00:22:28.280964 1756238 logs.go:123] Gathering logs for storage-provisioner [90659d963921] ...
I1212 00:22:28.280993 1756238 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90659d963921"
I1212 00:22:28.304943 1756238 logs.go:123] Gathering logs for container status ...
I1212 00:22:28.304974 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1212 00:22:28.353481 1756238 logs.go:123] Gathering logs for describe nodes ...
I1212 00:22:28.353513 1756238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1212 00:22:30.996143 1756238 system_pods.go:59] 8 kube-system pods found
I1212 00:22:30.996224 1756238 system_pods.go:61] "coredns-7c65d6cfc9-8pztl" [312f7b44-b849-44a0-b79b-ed4e3aaee051] Running
I1212 00:22:30.996245 1756238 system_pods.go:61] "etcd-no-preload-828307" [2d986771-24ea-43b2-acdb-d0057316bd04] Running
I1212 00:22:30.996266 1756238 system_pods.go:61] "kube-apiserver-no-preload-828307" [adf56c63-9671-4345-b050-fc472725fc2d] Running
I1212 00:22:30.996287 1756238 system_pods.go:61] "kube-controller-manager-no-preload-828307" [eef520d7-4685-44c4-abbc-f089e7c4afe2] Running
I1212 00:22:30.996308 1756238 system_pods.go:61] "kube-proxy-662r7" [e07ed71b-f9bf-4f1e-87c5-0dbcfc42db46] Running
I1212 00:22:30.996328 1756238 system_pods.go:61] "kube-scheduler-no-preload-828307" [e06b90d2-fdc9-4f32-a8ed-59f81e60339e] Running
I1212 00:22:30.996352 1756238 system_pods.go:61] "metrics-server-6867b74b74-kwdt9" [e349eeac-e1a4-41c7-b844-8024218f6cc1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1212 00:22:30.996382 1756238 system_pods.go:61] "storage-provisioner" [c849ebed-df0c-4949-833d-6d285597bd61] Running
I1212 00:22:30.996405 1756238 system_pods.go:74] duration metric: took 3.584602829s to wait for pod list to return data ...
I1212 00:22:30.996425 1756238 default_sa.go:34] waiting for default service account to be created ...
I1212 00:22:30.999386 1756238 default_sa.go:45] found service account: "default"
I1212 00:22:30.999453 1756238 default_sa.go:55] duration metric: took 3.007087ms for default service account to be created ...
I1212 00:22:30.999464 1756238 system_pods.go:116] waiting for k8s-apps to be running ...
I1212 00:22:31.005261 1756238 system_pods.go:86] 8 kube-system pods found
I1212 00:22:31.005299 1756238 system_pods.go:89] "coredns-7c65d6cfc9-8pztl" [312f7b44-b849-44a0-b79b-ed4e3aaee051] Running
I1212 00:22:31.005307 1756238 system_pods.go:89] "etcd-no-preload-828307" [2d986771-24ea-43b2-acdb-d0057316bd04] Running
I1212 00:22:31.005313 1756238 system_pods.go:89] "kube-apiserver-no-preload-828307" [adf56c63-9671-4345-b050-fc472725fc2d] Running
I1212 00:22:31.005321 1756238 system_pods.go:89] "kube-controller-manager-no-preload-828307" [eef520d7-4685-44c4-abbc-f089e7c4afe2] Running
I1212 00:22:31.005326 1756238 system_pods.go:89] "kube-proxy-662r7" [e07ed71b-f9bf-4f1e-87c5-0dbcfc42db46] Running
I1212 00:22:31.005330 1756238 system_pods.go:89] "kube-scheduler-no-preload-828307" [e06b90d2-fdc9-4f32-a8ed-59f81e60339e] Running
I1212 00:22:31.005339 1756238 system_pods.go:89] "metrics-server-6867b74b74-kwdt9" [e349eeac-e1a4-41c7-b844-8024218f6cc1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1212 00:22:31.005345 1756238 system_pods.go:89] "storage-provisioner" [c849ebed-df0c-4949-833d-6d285597bd61] Running
I1212 00:22:31.005367 1756238 system_pods.go:126] duration metric: took 5.895531ms to wait for k8s-apps to be running ...
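The wait above passes once the kube-system pods have been listed and inspected; note that the Pending metrics-server (containers unready, not unschedulable) does not block it. The same listing can be reproduced externally with client-go; the kubeconfig path below is the one the commands in this log use on the node, and is otherwise an assumption:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // List kube-system pods and print their phases, mirroring the
    // system_pods.go check in spirit.
    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
    	}
    }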
I1212 00:22:31.005375 1756238 system_svc.go:44] waiting for kubelet service to be running ....
I1212 00:22:31.005446 1756238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1212 00:22:31.018968 1756238 system_svc.go:56] duration metric: took 13.583386ms WaitForService to wait for kubelet
I1212 00:22:31.018999 1756238 kubeadm.go:582] duration metric: took 4m17.242548798s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1212 00:22:31.019019 1756238 node_conditions.go:102] verifying NodePressure condition ...
I1212 00:22:31.024384 1756238 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
I1212 00:22:31.024417 1756238 node_conditions.go:123] node cpu capacity is 2
I1212 00:22:31.024430 1756238 node_conditions.go:105] duration metric: took 5.405148ms to run NodePressure ...
I1212 00:22:31.024444 1756238 start.go:241] waiting for startup goroutines ...
I1212 00:22:31.024451 1756238 start.go:246] waiting for cluster config update ...
I1212 00:22:31.024463 1756238 start.go:255] writing updated cluster config ...
I1212 00:22:31.024777 1756238 ssh_runner.go:195] Run: rm -f paused
I1212 00:22:31.088401 1756238 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
I1212 00:22:31.093661 1756238 out.go:177] * Done! kubectl is now configured to use "no-preload-828307" cluster and "default" namespace by default
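Two start jobs share this log: PID 1756238 drives no-preload-828307 and finishes here, while PID 1748736 drives old-k8s-version-385687 and continues below, which is why the timestamps step back from 00:22:31 to 00:22:28 at this seam. A small demultiplexer that keeps one PID's lines, assuming the combined log is fed on stdin; the PID constant is the knob to swap:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // Keep only the lines whose klog header carries the given PID, so
    // each interleaved start job can be read as a straight narrative.
    func main() {
    	const keep = "1748736"
    	header := regexp.MustCompile(`^[IWE]\d{4} [\d:.]+\s+(\d+)\s`)
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some log lines are long
    	for sc.Scan() {
    		line := sc.Text()
    		if m := header.FindStringSubmatch(line); m != nil && m[1] == keep {
    			fmt.Println(line)
    		}
    	}
    }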
I1212 00:22:28.491369 1748736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 00:22:28.504527 1748736 api_server.go:72] duration metric: took 5m55.339563551s to wait for apiserver process to appear ...
I1212 00:22:28.504579 1748736 api_server.go:88] waiting for apiserver healthz status ...
I1212 00:22:28.504658 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1212 00:22:28.524975 1748736 logs.go:282] 2 containers: [b01ba427f07f d8c189ec5293]
I1212 00:22:28.525057 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1212 00:22:28.544779 1748736 logs.go:282] 2 containers: [c87014549ad3 3339d1ea608b]
I1212 00:22:28.544877 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1212 00:22:28.565064 1748736 logs.go:282] 2 containers: [26c238c9510b c1eb50b61731]
I1212 00:22:28.565150 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1212 00:22:28.585526 1748736 logs.go:282] 2 containers: [4fb87ba7f570 e1f79e78d53d]
I1212 00:22:28.585614 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1212 00:22:28.605266 1748736 logs.go:282] 2 containers: [0bb2da207c94 d2ad7ae21ec1]
I1212 00:22:28.605350 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1212 00:22:28.631922 1748736 logs.go:282] 2 containers: [d5db19c26736 b9c1c621b89a]
I1212 00:22:28.632009 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1212 00:22:28.653619 1748736 logs.go:282] 0 containers: []
W1212 00:22:28.653643 1748736 logs.go:284] No container was found matching "kindnet"
I1212 00:22:28.653706 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1212 00:22:28.673901 1748736 logs.go:282] 2 containers: [5567aacfd876 f96d450a47e8]
I1212 00:22:28.673984 1748736 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1212 00:22:28.701072 1748736 logs.go:282] 1 containers: [eb5d61e0470f]
I1212 00:22:28.701103 1748736 logs.go:123] Gathering logs for dmesg ...
I1212 00:22:28.701116 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1212 00:22:28.726410 1748736 logs.go:123] Gathering logs for etcd [3339d1ea608b] ...
I1212 00:22:28.726438 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3339d1ea608b"
I1212 00:22:28.760916 1748736 logs.go:123] Gathering logs for kube-controller-manager [d5db19c26736] ...
I1212 00:22:28.760954 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5db19c26736"
I1212 00:22:28.827751 1748736 logs.go:123] Gathering logs for kubernetes-dashboard [eb5d61e0470f] ...
I1212 00:22:28.827784 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb5d61e0470f"
I1212 00:22:28.851303 1748736 logs.go:123] Gathering logs for kubelet ...
I1212 00:22:28.851332 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1212 00:22:28.920803 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:50 old-k8s-version-385687 kubelet[1416]: E1212 00:16:50.375774 1416 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:28.921062 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:50 old-k8s-version-385687 kubelet[1416]: E1212 00:16:50.397585 1416 reflector.go:138] object-"kube-system"/"coredns-token-rrgdv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-rrgdv" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:28.927356 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:54 old-k8s-version-385687 kubelet[1416]: E1212 00:16:54.000437 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:28.928386 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:55 old-k8s-version-385687 kubelet[1416]: E1212 00:16:55.428334 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.929224 1748736 logs.go:138] Found kubelet problem: Dec 12 00:16:56 old-k8s-version-385687 kubelet[1416]: E1212 00:16:56.498264 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.930856 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:10 old-k8s-version-385687 kubelet[1416]: E1212 00:17:10.831853 1416 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-n4jdg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-n4jdg" is forbidden: User "system:node:old-k8s-version-385687" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-385687' and this object
W1212 00:22:28.933037 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:11 old-k8s-version-385687 kubelet[1416]: E1212 00:17:11.272479 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:28.937597 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:19 old-k8s-version-385687 kubelet[1416]: E1212 00:17:19.217024 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:28.937974 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:19 old-k8s-version-385687 kubelet[1416]: E1212 00:17:19.713310 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.938283 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:24 old-k8s-version-385687 kubelet[1416]: E1212 00:17:24.771806 1416 pod_workers.go:191] Error syncing pod c35bb048-c903-47f6-8458-82dac2ca6358 ("storage-provisioner_kube-system(c35bb048-c903-47f6-8458-82dac2ca6358)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c35bb048-c903-47f6-8458-82dac2ca6358)"
W1212 00:22:28.938597 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:25 old-k8s-version-385687 kubelet[1416]: E1212 00:17:25.254131 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.941196 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:35 old-k8s-version-385687 kubelet[1416]: E1212 00:17:35.918222 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:28.943458 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:40 old-k8s-version-385687 kubelet[1416]: E1212 00:17:40.315954 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:28.943664 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:49 old-k8s-version-385687 kubelet[1416]: E1212 00:17:49.254240 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.943853 1748736 logs.go:138] Found kubelet problem: Dec 12 00:17:55 old-k8s-version-385687 kubelet[1416]: E1212 00:17:55.260828 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.946074 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:03 old-k8s-version-385687 kubelet[1416]: E1212 00:18:03.816532 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:28.946258 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:07 old-k8s-version-385687 kubelet[1416]: E1212 00:18:07.253787 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.946457 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:15 old-k8s-version-385687 kubelet[1416]: E1212 00:18:15.290346 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.948521 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:21 old-k8s-version-385687 kubelet[1416]: E1212 00:18:21.284205 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:28.948723 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:28 old-k8s-version-385687 kubelet[1416]: E1212 00:18:28.254710 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.948910 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:34 old-k8s-version-385687 kubelet[1416]: E1212 00:18:34.260169 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.949108 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:40 old-k8s-version-385687 kubelet[1416]: E1212 00:18:40.256316 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.949292 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:47 old-k8s-version-385687 kubelet[1416]: E1212 00:18:47.254188 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.951520 1748736 logs.go:138] Found kubelet problem: Dec 12 00:18:51 old-k8s-version-385687 kubelet[1416]: E1212 00:18:51.919190 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:28.951708 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:01 old-k8s-version-385687 kubelet[1416]: E1212 00:19:01.253957 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.951904 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:05 old-k8s-version-385687 kubelet[1416]: E1212 00:19:05.254139 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.952088 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:15 old-k8s-version-385687 kubelet[1416]: E1212 00:19:15.254932 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.952285 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:18 old-k8s-version-385687 kubelet[1416]: E1212 00:19:18.268917 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.952470 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:26 old-k8s-version-385687 kubelet[1416]: E1212 00:19:26.254839 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.952666 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:32 old-k8s-version-385687 kubelet[1416]: E1212 00:19:32.255755 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.952849 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:40 old-k8s-version-385687 kubelet[1416]: E1212 00:19:40.257318 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.953045 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:44 old-k8s-version-385687 kubelet[1416]: E1212 00:19:44.254431 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.955100 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:52 old-k8s-version-385687 kubelet[1416]: E1212 00:19:52.286229 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1212 00:22:28.955296 1748736 logs.go:138] Found kubelet problem: Dec 12 00:19:56 old-k8s-version-385687 kubelet[1416]: E1212 00:19:56.256770 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.955507 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:05 old-k8s-version-385687 kubelet[1416]: E1212 00:20:05.253993 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.955711 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:11 old-k8s-version-385687 kubelet[1416]: E1212 00:20:11.253831 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.955901 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:16 old-k8s-version-385687 kubelet[1416]: E1212 00:20:16.258580 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.958141 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:24 old-k8s-version-385687 kubelet[1416]: E1212 00:20:24.840888 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W1212 00:22:28.958327 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:28 old-k8s-version-385687 kubelet[1416]: E1212 00:20:28.253767 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.958541 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:39 old-k8s-version-385687 kubelet[1416]: E1212 00:20:39.254296 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.958730 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:39 old-k8s-version-385687 kubelet[1416]: E1212 00:20:39.255523 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.958934 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:50 old-k8s-version-385687 kubelet[1416]: E1212 00:20:50.260236 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.959119 1748736 logs.go:138] Found kubelet problem: Dec 12 00:20:54 old-k8s-version-385687 kubelet[1416]: E1212 00:20:54.253607 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.959315 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:04 old-k8s-version-385687 kubelet[1416]: E1212 00:21:04.253856 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.959504 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:09 old-k8s-version-385687 kubelet[1416]: E1212 00:21:09.253849 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.959703 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:17 old-k8s-version-385687 kubelet[1416]: E1212 00:21:17.254163 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.959904 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:20 old-k8s-version-385687 kubelet[1416]: E1212 00:21:20.254162 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.960106 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:28 old-k8s-version-385687 kubelet[1416]: E1212 00:21:28.262765 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.960292 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:34 old-k8s-version-385687 kubelet[1416]: E1212 00:21:34.253963 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.960490 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:41 old-k8s-version-385687 kubelet[1416]: E1212 00:21:41.253839 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.960676 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:46 old-k8s-version-385687 kubelet[1416]: E1212 00:21:46.258966 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.960874 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:52 old-k8s-version-385687 kubelet[1416]: E1212 00:21:52.254181 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.961060 1748736 logs.go:138] Found kubelet problem: Dec 12 00:21:59 old-k8s-version-385687 kubelet[1416]: E1212 00:21:59.253931 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.961258 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:05 old-k8s-version-385687 kubelet[1416]: E1212 00:22:05.253864 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.961443 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:13 old-k8s-version-385687 kubelet[1416]: E1212 00:22:13.253794 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.961641 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:19 old-k8s-version-385687 kubelet[1416]: E1212 00:22:19.253937 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:28.961827 1748736 logs.go:138] Found kubelet problem: Dec 12 00:22:28 old-k8s-version-385687 kubelet[1416]: E1212 00:22:28.255844 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
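Every kubelet problem flagged above reduces to two pull failures: the metrics-server image is pinned to the unresolvable host fake.domain, and registry.k8s.io/echoserver:1.4 (used by dashboard-metrics-scraper) is a schema 1 image this Docker refuses by default. The DNS half reproduces in isolation:

    package main

    import (
    	"fmt"
    	"net"
    )

    // fake.domain never resolves, matching the kubelet's
    // "dial tcp: lookup fake.domain ... no such host" errors; the pull
    // fails before any registry traffic happens.
    func main() {
    	addrs, err := net.LookupHost("fake.domain")
    	if err != nil {
    		fmt.Println("lookup failed as expected:", err)
    		return
    	}
    	fmt.Println("unexpectedly resolved:", addrs)
    }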
I1212 00:22:28.961837 1748736 logs.go:123] Gathering logs for describe nodes ...
I1212 00:22:28.961852 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1212 00:22:29.104804 1748736 logs.go:123] Gathering logs for kube-apiserver [b01ba427f07f] ...
I1212 00:22:29.104833 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b01ba427f07f"
I1212 00:22:29.147972 1748736 logs.go:123] Gathering logs for kube-apiserver [d8c189ec5293] ...
I1212 00:22:29.148008 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8c189ec5293"
I1212 00:22:29.213257 1748736 logs.go:123] Gathering logs for kube-controller-manager [b9c1c621b89a] ...
I1212 00:22:29.213294 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c1c621b89a"
I1212 00:22:29.256200 1748736 logs.go:123] Gathering logs for storage-provisioner [f96d450a47e8] ...
I1212 00:22:29.256293 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f96d450a47e8"
I1212 00:22:29.280038 1748736 logs.go:123] Gathering logs for container status ...
I1212 00:22:29.280110 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1212 00:22:29.348232 1748736 logs.go:123] Gathering logs for etcd [c87014549ad3] ...
I1212 00:22:29.348264 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c87014549ad3"
I1212 00:22:29.375100 1748736 logs.go:123] Gathering logs for coredns [c1eb50b61731] ...
I1212 00:22:29.375132 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1eb50b61731"
I1212 00:22:29.401243 1748736 logs.go:123] Gathering logs for coredns [26c238c9510b] ...
I1212 00:22:29.401271 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26c238c9510b"
I1212 00:22:29.424125 1748736 logs.go:123] Gathering logs for kube-scheduler [4fb87ba7f570] ...
I1212 00:22:29.424153 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb87ba7f570"
I1212 00:22:29.447967 1748736 logs.go:123] Gathering logs for kube-scheduler [e1f79e78d53d] ...
I1212 00:22:29.447996 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1f79e78d53d"
I1212 00:22:29.473912 1748736 logs.go:123] Gathering logs for kube-proxy [0bb2da207c94] ...
I1212 00:22:29.473945 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0bb2da207c94"
I1212 00:22:29.496691 1748736 logs.go:123] Gathering logs for kube-proxy [d2ad7ae21ec1] ...
I1212 00:22:29.496719 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2ad7ae21ec1"
I1212 00:22:29.519367 1748736 logs.go:123] Gathering logs for storage-provisioner [5567aacfd876] ...
I1212 00:22:29.519395 1748736 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5567aacfd876"
I1212 00:22:29.541417 1748736 logs.go:123] Gathering logs for Docker ...
I1212 00:22:29.541444 1748736 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1212 00:22:29.568180 1748736 out.go:358] Setting ErrFile to fd 2...
I1212 00:22:29.568210 1748736 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1212 00:22:29.568286 1748736 out.go:270] X Problems detected in kubelet:
W1212 00:22:29.568303 1748736 out.go:270] Dec 12 00:21:59 old-k8s-version-385687 kubelet[1416]: E1212 00:21:59.253931 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:29.568309 1748736 out.go:270] Dec 12 00:22:05 old-k8s-version-385687 kubelet[1416]: E1212 00:22:05.253864 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:29.568335 1748736 out.go:270] Dec 12 00:22:13 old-k8s-version-385687 kubelet[1416]: E1212 00:22:13.253794 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1212 00:22:29.568352 1748736 out.go:270] Dec 12 00:22:19 old-k8s-version-385687 kubelet[1416]: E1212 00:22:19.253937 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W1212 00:22:29.568357 1748736 out.go:270] Dec 12 00:22:28 old-k8s-version-385687 kubelet[1416]: E1212 00:22:28.255844 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1212 00:22:29.568365 1748736 out.go:358] Setting ErrFile to fd 2...
I1212 00:22:29.568373 1748736 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1212 00:22:39.569398 1748736 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1212 00:22:39.586317 1748736 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1212 00:22:39.589497 1748736 out.go:201]
W1212 00:22:39.592417 1748736 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1212 00:22:39.592460 1748736 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1212 00:22:39.592479 1748736 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1212 00:22:39.592491 1748736 out.go:270] *
W1212 00:22:39.593441 1748736 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1212 00:22:39.595524 1748736 out.go:201]
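Note what the exit above is and is not: /healthz returned 200 a few lines earlier, so K8S_UNHEALTHY_CONTROL_PLANE reflects the version check (the control plane never reported the requested v1.20.0) rather than a dead apiserver. A sketch of that kind of probe against /version, which the default system:public-info-viewer role also exposes anonymously; the address comes from this log and the TLS shortcut is an assumption:

    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // Query the apiserver's /version endpoint and compare the reported
    // gitVersion against the version the test asked for.
    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.76.2:8443/version")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	var v struct {
    		GitVersion string `json:"gitVersion"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
    		panic(err)
    	}
    	if v.GitVersion != "v1.20.0" {
    		fmt.Println("control plane reports", v.GitVersion, "not v1.20.0")
    	}
    }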
==> Docker <==
Dec 12 00:17:35 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:17:35.914847123Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=5db759932ea9c76d traceID=ae14e01724d1ccdd4038ceb4e07dc655
Dec 12 00:17:40 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:17:40.295713368Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=4b5eb68311271385 traceID=43ee5f7564541d0ac8ae7b5cd4de3878
Dec 12 00:17:40 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:17:40.295880798Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=4b5eb68311271385 traceID=43ee5f7564541d0ac8ae7b5cd4de3878
Dec 12 00:17:40 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:17:40.306313239Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=4b5eb68311271385 traceID=43ee5f7564541d0ac8ae7b5cd4de3878
Dec 12 00:18:03 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:18:03.606619513Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=ec37b3e899a9311b traceID=43b24b84fb29ce28d9de73d18d3137ff
Dec 12 00:18:03 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:18:03.811984835Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=ec37b3e899a9311b traceID=43b24b84fb29ce28d9de73d18d3137ff
Dec 12 00:18:03 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:18:03.812321073Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=ec37b3e899a9311b traceID=43b24b84fb29ce28d9de73d18d3137ff
Dec 12 00:18:03 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:18:03.812742674Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=ec37b3e899a9311b traceID=43b24b84fb29ce28d9de73d18d3137ff
Dec 12 00:18:21 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:18:21.280293143Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=5bc467b5b412d9dc traceID=494ee3c43138ac5e81d2820f3d6d8962
Dec 12 00:18:21 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:18:21.280788442Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=5bc467b5b412d9dc traceID=494ee3c43138ac5e81d2820f3d6d8962
Dec 12 00:18:21 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:18:21.283513173Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=5bc467b5b412d9dc traceID=494ee3c43138ac5e81d2820f3d6d8962
Dec 12 00:18:51 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:18:51.607638650Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=690c5dcad2a026a1 traceID=68dd9d235a6d7e8dd804821bf0a74c77
Dec 12 00:18:51 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:18:51.915588346Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=690c5dcad2a026a1 traceID=68dd9d235a6d7e8dd804821bf0a74c77
Dec 12 00:18:51 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:18:51.915734672Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=690c5dcad2a026a1 traceID=68dd9d235a6d7e8dd804821bf0a74c77
Dec 12 00:18:51 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:18:51.915777690Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=690c5dcad2a026a1 traceID=68dd9d235a6d7e8dd804821bf0a74c77
Dec 12 00:19:52 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:19:52.282484723Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=73763bc570ce34fa traceID=3c8bc11a385b150a278d73a75b97b469
Dec 12 00:19:52 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:19:52.282540032Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=73763bc570ce34fa traceID=3c8bc11a385b150a278d73a75b97b469
Dec 12 00:19:52 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:19:52.285534886Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=73763bc570ce34fa traceID=3c8bc11a385b150a278d73a75b97b469
Dec 12 00:20:24 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:20:24.619649404Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=6c1c260da84475e9 traceID=a66d92ddd1bf54f44f3c66f3effe8d75
Dec 12 00:20:24 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:20:24.837055232Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=6c1c260da84475e9 traceID=a66d92ddd1bf54f44f3c66f3effe8d75
Dec 12 00:20:24 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:20:24.837455214Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=6c1c260da84475e9 traceID=a66d92ddd1bf54f44f3c66f3effe8d75
Dec 12 00:20:24 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:20:24.837637051Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=6c1c260da84475e9 traceID=a66d92ddd1bf54f44f3c66f3effe8d75
Dec 12 00:22:40 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:22:40.277662252Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=d420fdb9be0c82e6 traceID=3dbe66cc0e88ffa244fd0c281dd63ba4
Dec 12 00:22:40 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:22:40.277715575Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=d420fdb9be0c82e6 traceID=3dbe66cc0e88ffa244fd0c281dd63ba4
Dec 12 00:22:40 old-k8s-version-385687 dockerd[1106]: time="2024-12-12T00:22:40.280778645Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" spanID=d420fdb9be0c82e6 traceID=3dbe66cc0e88ffa244fd0c281dd63ba4
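Note: every pull failure in the Docker log above bottoms out in DNS — `fake.domain` is deliberately unresolvable in this test (it is only used to force metrics-server into ImagePullBackOff). A standalone Go sketch, illustrative and not part of the test suite, reproduces the same error class the daemon reports (results depend on the local resolver):

```go
// Resolving the intentionally bogus registry host reproduces the
// "no such host" errors dockerd logs above.
package main

import (
	"errors"
	"fmt"
	"net"
)

func main() {
	_, err := net.LookupHost("fake.domain")
	var dnsErr *net.DNSError
	if errors.As(err, &dnsErr) && dnsErr.IsNotFound {
		// Corresponds to "dial tcp: lookup fake.domain ... no such host".
		fmt.Println("no such host:", dnsErr)
	} else {
		fmt.Println("lookup result:", err)
	}
}
```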
==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                      ATTEMPT   POD ID          POD
5567aacfd876e   ba04bb24b9575   5 minutes ago   Running   storage-provisioner       2         6aec8dfdfe7a7   storage-provisioner
eb5d61e0470f3   kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   5 minutes ago   Running   kubernetes-dashboard      0         2ab731971f4d5   kubernetes-dashboard-cd95d586-njjqg
eaaf0602e1cf0   1611cd07b61d5   5 minutes ago   Running   busybox                   1         58db0495a17f8   busybox
26c238c9510b2   db91994f4ee8f   5 minutes ago   Running   coredns                   1         6a943a6cb0d54   coredns-74ff55c5b-w5mlv
0bb2da207c94b   25a5233254979   5 minutes ago   Running   kube-proxy                1         d2e14bc02916d   kube-proxy-dg295
f96d450a47e8b   ba04bb24b9575   5 minutes ago   Exited    storage-provisioner       1         6aec8dfdfe7a7   storage-provisioner
4fb87ba7f5705   e7605f88f17d6   6 minutes ago   Running   kube-scheduler            1         0ba1ca23ed40f   kube-scheduler-old-k8s-version-385687
d5db19c267363   1df8a2b116bd1   6 minutes ago   Running   kube-controller-manager   1         4b58e85030ae9   kube-controller-manager-old-k8s-version-385687
b01ba427f07f6   2c08bbbc02d3a   6 minutes ago   Running   kube-apiserver            1         2b0077362b993   kube-apiserver-old-k8s-version-385687
c87014549ad31   05b738aa1bc63   6 minutes ago   Running   etcd                      1         9bb6e33c766ba   etcd-old-k8s-version-385687
904adc769c70f   gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago   Exited    busybox                   0         fa7faaa04dbaf   busybox
c1eb50b61731d   db91994f4ee8f   8 minutes ago   Exited    coredns                   0         f7f0ef0242c8d   coredns-74ff55c5b-w5mlv
d2ad7ae21ec19   25a5233254979   8 minutes ago   Exited    kube-proxy                0         48192985904ff   kube-proxy-dg295
e1f79e78d53d1   e7605f88f17d6   8 minutes ago   Exited    kube-scheduler            0         fea757b5d2428   kube-scheduler-old-k8s-version-385687
b9c1c621b89a5   1df8a2b116bd1   8 minutes ago   Exited    kube-controller-manager   0         9d37c4dd1e55a   kube-controller-manager-old-k8s-version-385687
d8c189ec52938   2c08bbbc02d3a   8 minutes ago   Exited    kube-apiserver            0         74d2876311f8d   kube-apiserver-old-k8s-version-385687
3339d1ea608bd   05b738aa1bc63   8 minutes ago   Exited    etcd                      0         f0f7a9655f3e0   etcd-old-k8s-version-385687
==> coredns [26c238c9510b] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:44493 - 18012 "HINFO IN 7698498961508969331.7943015537628632386. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.061274667s
==> coredns [c1eb50b61731] <==
E1212 00:16:11.087199 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=565&timeout=8m26s&timeoutSeconds=506&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
E1212 00:16:11.087341 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=200&timeout=7m7s&timeoutSeconds=427&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
E1212 00:16:11.087631 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=570&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:44251 - 54359 "HINFO IN 7128351819804842112.4513004474882817050. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013385269s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name:               old-k8s-version-385687
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=old-k8s-version-385687
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
                    minikube.k8s.io/name=old-k8s-version-385687
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2024_12_12T00_14_21_0700
                    minikube.k8s.io/version=v1.34.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 12 Dec 2024 00:14:17 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  old-k8s-version-385687
  AcquireTime:     <unset>
  RenewTime:       Thu, 12 Dec 2024 00:22:32 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 12 Dec 2024 00:17:42 +0000   Thu, 12 Dec 2024 00:14:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 12 Dec 2024 00:17:42 +0000   Thu, 12 Dec 2024 00:14:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 12 Dec 2024 00:17:42 +0000   Thu, 12 Dec 2024 00:14:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 12 Dec 2024 00:17:42 +0000   Thu, 12 Dec 2024 00:14:34 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.76.2
  Hostname:    old-k8s-version-385687
Capacity:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022300Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022300Ki
  pods:               110
System Info:
  Machine ID:                 e2c96e69450944ca9587e2b0f66fa166
  System UUID:                1897ea70-b1fb-465d-a098-eb61e541a363
  Boot ID:                    cbdc66c9-3f6a-4d5a-983b-1113a23b205f
  Kernel Version:             5.15.0-1072-aws
  OS Image:                   Ubuntu 22.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://27.4.0
  Kubelet Version:            v1.20.0
  Kube-Proxy Version:         v1.20.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (11 in total)
  Namespace               Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------               ----                                              ------------  ----------  ---------------  -------------  ---
  default                 busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
  kube-system             coredns-74ff55c5b-w5mlv                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m5s
  kube-system             etcd-old-k8s-version-385687                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m16s
  kube-system             kube-apiserver-old-k8s-version-385687             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m16s
  kube-system             kube-controller-manager-old-k8s-version-385687    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m16s
  kube-system             kube-proxy-dg295                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
  kube-system             kube-scheduler-old-k8s-version-385687             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
  kube-system             metrics-server-9975d5f86-4xxgq                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m31s
  kube-system             storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
  kubernetes-dashboard    dashboard-metrics-scraper-8d5bb5db8-cvhql         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
  kubernetes-dashboard    kubernetes-dashboard-cd95d586-njjqg               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                850m (42%)   0 (0%)
  memory             370Mi (4%)   170Mi (2%)
  ephemeral-storage  100Mi (0%)   0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
  hugepages-32Mi     0 (0%)       0 (0%)
  hugepages-64Ki     0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ---                    ----        -------
  Normal  NodeHasSufficientMemory  8m31s (x5 over 8m31s)  kubelet     Node old-k8s-version-385687 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m31s (x4 over 8m31s)  kubelet     Node old-k8s-version-385687 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m31s (x3 over 8m31s)  kubelet     Node old-k8s-version-385687 status is now: NodeHasSufficientPID
  Normal  Starting                 8m17s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  8m17s                  kubelet     Node old-k8s-version-385687 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m17s                  kubelet     Node old-k8s-version-385687 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m17s                  kubelet     Node old-k8s-version-385687 status is now: NodeHasSufficientPID
  Normal  NodeNotReady             8m17s                  kubelet     Node old-k8s-version-385687 status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  8m17s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                8m7s                   kubelet     Node old-k8s-version-385687 status is now: NodeReady
  Normal  Starting                 8m4s                   kube-proxy  Starting kube-proxy.
  Normal  Starting                 6m5s                   kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  6m5s (x8 over 6m5s)    kubelet     Node old-k8s-version-385687 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m5s (x8 over 6m5s)    kubelet     Node old-k8s-version-385687 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m5s (x7 over 6m5s)    kubelet     Node old-k8s-version-385687 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  6m5s                   kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 5m46s                  kube-proxy  Starting kube-proxy.
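Note: the node description above was captured by the test's log collection. For illustration, a hedged client-go sketch that reads the same node conditions programmatically; the kubeconfig path is the one from this run and the node name is this profile's, so both are placeholders elsewhere:

```go
// Read the Ready/MemoryPressure/DiskPressure/PIDPressure conditions
// shown in the "describe nodes" output above. Illustrative sketch,
// not how minikube gathers these logs.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the kubeconfig written by this test run.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/20083-1433638/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"old-k8s-version-385687", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}
```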
==> dmesg <==
==> etcd [3339d1ea608b] <==
raft2024/12/12 00:14:11 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2024-12-12 00:14:11.719184 I | etcdserver: setting up the initial cluster version to 3.4
2024-12-12 00:14:11.721344 N | etcdserver/membership: set the initial cluster version to 3.4
2024-12-12 00:14:11.721567 I | etcdserver/api: enabled capabilities for version 3.4
2024-12-12 00:14:11.721715 I | etcdserver: published {Name:old-k8s-version-385687 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2024-12-12 00:14:11.722000 I | embed: ready to serve client requests
2024-12-12 00:14:11.723665 I | embed: serving client requests on 192.168.76.2:2379
2024-12-12 00:14:11.724083 I | embed: ready to serve client requests
2024-12-12 00:14:11.729109 I | embed: serving client requests on 127.0.0.1:2379
2024-12-12 00:14:25.080458 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:14:25.337741 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:14:35.337619 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:14:45.337703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:14:55.337577 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:15:05.337651 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:15:15.337497 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:15:25.337763 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:15:35.337699 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:15:45.337637 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:15:55.337679 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:16:05.337619 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:16:11.178864 N | pkg/osutil: received terminated signal, shutting down...
WARNING: 2024/12/12 00:16:11 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2024-12-12 00:16:11.240747 I | etcdserver: skipped leadership transfer for single voting member cluster
WARNING: 2024/12/12 00:16:11 grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: operation was canceled". Reconnecting...
==> etcd [c87014549ad3] <==
2024-12-12 00:18:38.919738 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:18:48.919805 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:18:58.919732 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:19:08.919856 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:19:18.919730 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:19:28.919877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:19:38.919838 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:19:48.919844 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:19:58.919896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:20:08.919726 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:20:18.919782 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:20:28.919986 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:20:38.919784 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:20:48.920094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:20:58.919798 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:21:08.919887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:21:18.919905 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:21:28.919727 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:21:38.919799 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:21:48.919781 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:21:58.919792 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:22:08.919877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:22:18.919813 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:22:28.919814 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-12 00:22:38.919689 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
00:22:41 up 7:05, 0 users, load average: 1.10, 2.72, 3.39
Linux old-k8s-version-385687 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [b01ba427f07f] <==
I1212 00:19:24.717972 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1212 00:19:24.717981 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1212 00:19:54.086598 1 handler_proxy.go:102] no RequestInfo found in the context
E1212 00:19:54.086843 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1212 00:19:54.086861 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1212 00:19:56.276651 1 client.go:360] parsed scheme: "passthrough"
I1212 00:19:56.276695 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1212 00:19:56.276705 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1212 00:20:33.200465 1 client.go:360] parsed scheme: "passthrough"
I1212 00:20:33.200513 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1212 00:20:33.200522 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1212 00:21:04.434120 1 client.go:360] parsed scheme: "passthrough"
I1212 00:21:04.434169 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1212 00:21:04.434178 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1212 00:21:45.606690 1 client.go:360] parsed scheme: "passthrough"
I1212 00:21:45.606741 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1212 00:21:45.606750 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1212 00:21:51.672427 1 handler_proxy.go:102] no RequestInfo found in the context
E1212 00:21:51.672546 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1212 00:21:51.672560 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1212 00:22:27.887622 1 client.go:360] parsed scheme: "passthrough"
I1212 00:22:27.887667 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1212 00:22:27.887676 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [d8c189ec5293] <==
W1212 00:16:11.239139 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239179 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239220 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239262 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239301 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239339 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239378 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239467 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239525 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239580 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239618 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239659 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239697 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239738 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239778 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239818 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239874 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239915 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.239957 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.240000 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.240039 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.240082 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.240117 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.240176 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1212 00:16:11.240214 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
==> kube-controller-manager [b9c1c621b89a] <==
I1212 00:14:36.503715 1 taint_manager.go:187] Starting NoExecuteTaintManager
I1212 00:14:36.509781 1 shared_informer.go:247] Caches are synced for node
I1212 00:14:36.510005 1 range_allocator.go:172] Starting range CIDR allocator
I1212 00:14:36.510126 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I1212 00:14:36.510219 1 shared_informer.go:247] Caches are synced for cidrallocator
I1212 00:14:36.512676 1 shared_informer.go:247] Caches are synced for endpoint_slice
I1212 00:14:36.515032 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I1212 00:14:36.515142 1 shared_informer.go:247] Caches are synced for resource quota
I1212 00:14:36.517264 1 shared_informer.go:247] Caches are synced for resource quota
I1212 00:14:36.527042 1 range_allocator.go:373] Set node old-k8s-version-385687 PodCIDR to [10.244.0.0/24]
I1212 00:14:36.561178 1 shared_informer.go:247] Caches are synced for daemon sets
I1212 00:14:36.562353 1 shared_informer.go:247] Caches are synced for GC
I1212 00:14:36.562753 1 shared_informer.go:247] Caches are synced for attach detach
I1212 00:14:36.563603 1 shared_informer.go:247] Caches are synced for TTL
I1212 00:14:36.563728 1 shared_informer.go:247] Caches are synced for persistent volume
I1212 00:14:36.580117 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dg295"
I1212 00:14:36.688803 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1212 00:14:36.962425 1 shared_informer.go:247] Caches are synced for garbage collector
I1212 00:14:36.962466 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1212 00:14:36.988971 1 shared_informer.go:247] Caches are synced for garbage collector
I1212 00:14:38.158873 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I1212 00:14:38.219997 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-zh6sz"
I1212 00:16:09.417761 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E1212 00:16:09.671017 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I1212 00:16:10.506125 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-4xxgq"
==> kube-controller-manager [d5db19c26736] <==
W1212 00:18:16.201203 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1212 00:18:42.251475 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1212 00:18:47.851685 1 request.go:655] Throttling request took 1.048505682s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1212 00:18:48.703160 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1212 00:19:12.753342 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1212 00:19:20.353562 1 request.go:655] Throttling request took 1.048489616s, request: GET:https://192.168.76.2:8443/apis/batch/v1?timeout=32s
W1212 00:19:21.263516 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1212 00:19:43.255211 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1212 00:19:52.914132 1 request.go:655] Throttling request took 1.048450607s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
W1212 00:19:53.765612 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1212 00:20:13.757018 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1212 00:20:25.416099 1 request.go:655] Throttling request took 1.048490038s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
W1212 00:20:26.269041 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1212 00:20:44.264296 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1212 00:20:57.919449 1 request.go:655] Throttling request took 1.048347574s, request: GET:https://192.168.76.2:8443/apis/discovery.k8s.io/v1beta1?timeout=32s
W1212 00:20:58.770945 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1212 00:21:14.766306 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1212 00:21:30.421292 1 request.go:655] Throttling request took 1.047953761s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1212 00:21:31.280482 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1212 00:21:45.269442 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1212 00:22:02.930891 1 request.go:655] Throttling request took 1.048500823s, request: GET:https://192.168.76.2:8443/apis/autoscaling/v1?timeout=32s
W1212 00:22:03.782234 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1212 00:22:15.771244 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1212 00:22:35.432794 1 request.go:655] Throttling request took 1.048259416s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W1212 00:22:36.284308 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
==> kube-proxy [0bb2da207c94] <==
I1212 00:16:55.105443 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1212 00:16:55.105537 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1212 00:16:55.142884 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1212 00:16:55.143013 1 server_others.go:185] Using iptables Proxier.
I1212 00:16:55.143257 1 server.go:650] Version: v1.20.0
I1212 00:16:55.143809 1 config.go:315] Starting service config controller
I1212 00:16:55.143819 1 shared_informer.go:240] Waiting for caches to sync for service config
I1212 00:16:55.149509 1 config.go:224] Starting endpoint slice config controller
I1212 00:16:55.149564 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1212 00:16:55.246699 1 shared_informer.go:247] Caches are synced for service config
I1212 00:16:55.249683 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [d2ad7ae21ec1] <==
I1212 00:14:37.714981 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1212 00:14:37.715093 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1212 00:14:37.843797 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1212 00:14:37.843887 1 server_others.go:185] Using iptables Proxier.
I1212 00:14:37.844096 1 server.go:650] Version: v1.20.0
I1212 00:14:37.844583 1 config.go:315] Starting service config controller
I1212 00:14:37.844601 1 shared_informer.go:240] Waiting for caches to sync for service config
I1212 00:14:37.847186 1 config.go:224] Starting endpoint slice config controller
I1212 00:14:37.847207 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1212 00:14:37.944758 1 shared_informer.go:247] Caches are synced for service config
I1212 00:14:37.950977 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [4fb87ba7f570] <==
I1212 00:16:39.442004 1 serving.go:331] Generated self-signed cert in-memory
W1212 00:16:50.376201 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1212 00:16:50.376242 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1212 00:16:50.376251 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1212 00:16:50.376257 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1212 00:16:50.755095 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1212 00:16:50.766971 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1212 00:16:50.766994 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1212 00:16:50.767018 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1212 00:16:50.983533 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [e1f79e78d53d] <==
W1212 00:14:17.848547 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1212 00:14:17.848668 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1212 00:14:17.848740 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1212 00:14:17.892840 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1212 00:14:17.893082 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1212 00:14:17.893111 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1212 00:14:17.897739 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E1212 00:14:17.901202 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1212 00:14:17.907508 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1212 00:14:17.907762 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1212 00:14:17.907970 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1212 00:14:17.908288 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1212 00:14:17.908373 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1212 00:14:17.908484 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1212 00:14:17.908539 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1212 00:14:17.908589 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1212 00:14:17.908640 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1212 00:14:17.908731 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1212 00:14:17.908799 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1212 00:14:18.749349 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1212 00:14:18.780593 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1212 00:14:18.815577 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1212 00:14:18.961366 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I1212 00:14:20.498933 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
W1212 00:16:11.077673 1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: very short watch: k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Unexpected watch close - watch lasted less than a second and no items received
==> kubelet <==
Dec 12 00:20:24 old-k8s-version-385687 kubelet[1416]: E1212 00:20:24.840888 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Dec 12 00:20:28 old-k8s-version-385687 kubelet[1416]: E1212 00:20:28.253767 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:20:39 old-k8s-version-385687 kubelet[1416]: E1212 00:20:39.254296 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:20:39 old-k8s-version-385687 kubelet[1416]: E1212 00:20:39.255523 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:20:50 old-k8s-version-385687 kubelet[1416]: E1212 00:20:50.260236 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:20:54 old-k8s-version-385687 kubelet[1416]: E1212 00:20:54.253607 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:04 old-k8s-version-385687 kubelet[1416]: E1212 00:21:04.253856 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:09 old-k8s-version-385687 kubelet[1416]: E1212 00:21:09.253849 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:17 old-k8s-version-385687 kubelet[1416]: E1212 00:21:17.254163 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:20 old-k8s-version-385687 kubelet[1416]: E1212 00:21:20.254162 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:28 old-k8s-version-385687 kubelet[1416]: E1212 00:21:28.262765 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:34 old-k8s-version-385687 kubelet[1416]: E1212 00:21:34.253963 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:41 old-k8s-version-385687 kubelet[1416]: E1212 00:21:41.253839 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:46 old-k8s-version-385687 kubelet[1416]: E1212 00:21:46.258966 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:52 old-k8s-version-385687 kubelet[1416]: E1212 00:21:52.254181 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:21:59 old-k8s-version-385687 kubelet[1416]: E1212 00:21:59.253931 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:22:05 old-k8s-version-385687 kubelet[1416]: E1212 00:22:05.253864 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:22:13 old-k8s-version-385687 kubelet[1416]: E1212 00:22:13.253794 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:22:19 old-k8s-version-385687 kubelet[1416]: E1212 00:22:19.253937 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:22:28 old-k8s-version-385687 kubelet[1416]: E1212 00:22:28.255844 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 12 00:22:34 old-k8s-version-385687 kubelet[1416]: E1212 00:22:34.269616 1416 pod_workers.go:191] Error syncing pod ccf557ea-788d-4158-b6aa-9905dd58a0c0 ("dashboard-metrics-scraper-8d5bb5db8-cvhql_kubernetes-dashboard(ccf557ea-788d-4158-b6aa-9905dd58a0c0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Dec 12 00:22:40 old-k8s-version-385687 kubelet[1416]: E1212 00:22:40.283026 1416 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Dec 12 00:22:40 old-k8s-version-385687 kubelet[1416]: E1212 00:22:40.283554 1416 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Dec 12 00:22:40 old-k8s-version-385687 kubelet[1416]: E1212 00:22:40.284035 1416 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-q6tlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Dec 12 00:22:40 old-k8s-version-385687 kubelet[1416]: E1212 00:22:40.284439 1416 pod_workers.go:191] Error syncing pod 4651bae7-8ee0-4c76-8628-8843b8b5eb72 ("metrics-server-9975d5f86-4xxgq_kube-system(4651bae7-8ee0-4c76-8628-8843b8b5eb72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
==> kubernetes-dashboard [eb5d61e0470f] <==
2024/12/12 00:17:19 Starting overwatch
2024/12/12 00:17:19 Using namespace: kubernetes-dashboard
2024/12/12 00:17:19 Using in-cluster config to connect to apiserver
2024/12/12 00:17:19 Using secret token for csrf signing
2024/12/12 00:17:19 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/12/12 00:17:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/12/12 00:17:19 Successful initial request to the apiserver, version: v1.20.0
2024/12/12 00:17:19 Generating JWE encryption key
2024/12/12 00:17:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/12/12 00:17:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/12/12 00:17:19 Initializing JWE encryption key from synchronized object
2024/12/12 00:17:19 Creating in-cluster Sidecar client
2024/12/12 00:17:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/12 00:17:19 Serving insecurely on HTTP port: 9090
2024/12/12 00:17:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/12 00:18:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/12 00:18:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/12 00:19:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/12 00:19:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/12 00:20:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/12 00:20:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/12 00:21:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/12 00:21:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/12 00:22:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [5567aacfd876] <==
I1212 00:17:40.515558 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1212 00:17:40.531097 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1212 00:17:40.531219 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1212 00:17:57.987654 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1212 00:17:57.987923 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-385687_3afcf1ff-9c81-4114-8ad2-bc9eca3d9e50!
I1212 00:17:57.992257 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da9d646d-71ea-4ca7-ba20-a13551745b98", APIVersion:"v1", ResourceVersion:"794", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-385687_3afcf1ff-9c81-4114-8ad2-bc9eca3d9e50 became leader
I1212 00:17:58.088343 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-385687_3afcf1ff-9c81-4114-8ad2-bc9eca3d9e50!
==> storage-provisioner [f96d450a47e8] <==
I1212 00:16:54.539551 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1212 00:17:24.541846 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-385687 -n old-k8s-version-385687
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-385687 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-4xxgq dashboard-metrics-scraper-8d5bb5db8-cvhql
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-385687 describe pod metrics-server-9975d5f86-4xxgq dashboard-metrics-scraper-8d5bb5db8-cvhql
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-385687 describe pod metrics-server-9975d5f86-4xxgq dashboard-metrics-scraper-8d5bb5db8-cvhql: exit status 1 (88.878832ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-4xxgq" not found
Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-cvhql" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-385687 describe pod metrics-server-9975d5f86-4xxgq dashboard-metrics-scraper-8d5bb5db8-cvhql: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (379.86s)