=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-111858 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0
E0730 03:40:59.909962 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/auto-264426/client.crt: no such file or directory
E0730 03:41:00.647030 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:00.652432 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:00.662758 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:00.683020 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:00.724053 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:00.804166 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:00.964636 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:01.285299 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:01.926340 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:03.207165 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:05.767569 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:09.493760 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/functional-426050/client.crt: no such file or directory
E0730 03:41:10.887980 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:14.720357 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/calico-264426/client.crt: no such file or directory
E0730 03:41:21.129205 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:41:33.793793 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/false-264426/client.crt: no such file or directory
E0730 03:41:41.609674 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:42:01.974055 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:01.980094 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:01.990474 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:02.010809 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:02.051208 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:02.131494 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:02.291898 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:02.612514 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:03.253395 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:04.533954 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:07.094099 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:11.292894 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/custom-flannel-264426/client.crt: no such file or directory
E0730 03:42:12.214802 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:22.455608 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:22.569887 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:42:27.028452 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kindnet-264426/client.crt: no such file or directory
E0730 03:42:42.936575 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:42:44.125692 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:44.130968 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:44.141288 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:44.161631 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:44.201890 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:44.282739 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:44.443185 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:44.763757 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:45.404157 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:46.684841 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:49.245567 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:54.366448 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:42:54.712588 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kindnet-264426/client.crt: no such file or directory
E0730 03:42:55.714020 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/false-264426/client.crt: no such file or directory
E0730 03:43:04.606632 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:43:23.896854 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:43:25.086890 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:43:30.876519 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/calico-264426/client.crt: no such file or directory
E0730 03:43:44.490452 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/enable-default-cni-264426/client.crt: no such file or directory
E0730 03:43:44.581770 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:44.587006 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:44.597355 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:44.617658 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:44.657984 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:44.738363 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:44.898741 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:45.219306 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:45.860388 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:47.140847 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:49.701051 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:51.004421 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/skaffold-183379/client.crt: no such file or directory
E0730 03:43:54.821855 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:43:58.560782 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/calico-264426/client.crt: no such file or directory
E0730 03:44:05.062893 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:44:06.047915 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/bridge-264426/client.crt: no such file or directory
E0730 03:44:25.543135 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/kubenet-264426/client.crt: no such file or directory
E0730 03:44:27.445435 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/custom-flannel-264426/client.crt: no such file or directory
E0730 03:44:32.864765 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/addons-001594/client.crt: no such file or directory
E0730 03:44:45.817398 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/flannel-264426/client.crt: no such file or directory
E0730 03:44:55.133157 658178 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/custom-flannel-264426/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-111858 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0: exit status 102 (6m13.34280851s)
-- stdout --
* [old-k8s-version-111858] minikube v1.33.1 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=19347
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/19347-652786/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-652786/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
* Using the docker driver based on existing profile
* Starting "old-k8s-version-111858" primary control-plane node in "old-k8s-version-111858" cluster
* Pulling base image v0.0.44-1721902582-19326 ...
* Restarting existing docker container for "old-k8s-version-111858" ...
* Preparing Kubernetes v1.20.0 on Docker 27.1.1 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-111858 addons enable metrics-server
* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
-- /stdout --
** stderr **
I0730 03:40:54.327303 1043928 out.go:291] Setting OutFile to fd 1 ...
I0730 03:40:54.327711 1043928 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 03:40:54.327721 1043928 out.go:304] Setting ErrFile to fd 2...
I0730 03:40:54.327727 1043928 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 03:40:54.327989 1043928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-652786/.minikube/bin
I0730 03:40:54.328372 1043928 out.go:298] Setting JSON to false
I0730 03:40:54.329658 1043928 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":84199,"bootTime":1722226656,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0730 03:40:54.330054 1043928 start.go:139] virtualization:
I0730 03:40:54.333174 1043928 out.go:177] * [old-k8s-version-111858] minikube v1.33.1 on Ubuntu 20.04 (arm64)
I0730 03:40:54.335861 1043928 out.go:177] - MINIKUBE_LOCATION=19347
I0730 03:40:54.335941 1043928 notify.go:220] Checking for updates...
I0730 03:40:54.345070 1043928 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0730 03:40:54.347789 1043928 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19347-652786/kubeconfig
I0730 03:40:54.350345 1043928 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-652786/.minikube
I0730 03:40:54.353632 1043928 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0730 03:40:54.356283 1043928 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0730 03:40:54.359422 1043928 config.go:182] Loaded profile config "old-k8s-version-111858": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0730 03:40:54.362730 1043928 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
I0730 03:40:54.365408 1043928 driver.go:392] Setting default libvirt URI to qemu:///system
I0730 03:40:54.400681 1043928 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
I0730 03:40:54.400843 1043928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0730 03:40:54.488975 1043928 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-30 03:40:54.479482168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
I0730 03:40:54.489115 1043928 docker.go:307] overlay module found
I0730 03:40:54.492481 1043928 out.go:177] * Using the docker driver based on existing profile
I0730 03:40:54.495153 1043928 start.go:297] selected driver: docker
I0730 03:40:54.495174 1043928 start.go:901] validating driver "docker" against &{Name:old-k8s-version-111858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-111858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0730 03:40:54.495293 1043928 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0730 03:40:54.495885 1043928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0730 03:40:54.593317 1043928 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-30 03:40:54.583257764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
I0730 03:40:54.594030 1043928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0730 03:40:54.594062 1043928 cni.go:84] Creating CNI manager for ""
I0730 03:40:54.594076 1043928 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0730 03:40:54.594131 1043928 start.go:340] cluster config:
{Name:old-k8s-version-111858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-111858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0730 03:40:54.597296 1043928 out.go:177] * Starting "old-k8s-version-111858" primary control-plane node in "old-k8s-version-111858" cluster
I0730 03:40:54.600082 1043928 cache.go:121] Beginning downloading kic base image for docker with docker
I0730 03:40:54.604498 1043928 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
I0730 03:40:54.607370 1043928 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0730 03:40:54.607429 1043928 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19347-652786/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
I0730 03:40:54.607442 1043928 cache.go:56] Caching tarball of preloaded images
I0730 03:40:54.607455 1043928 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
I0730 03:40:54.607525 1043928 preload.go:172] Found /home/jenkins/minikube-integration/19347-652786/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0730 03:40:54.607534 1043928 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
I0730 03:40:54.607656 1043928 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/old-k8s-version-111858/config.json ...
W0730 03:40:54.628127 1043928 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
I0730 03:40:54.628147 1043928 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
I0730 03:40:54.628256 1043928 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
I0730 03:40:54.628280 1043928 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
I0730 03:40:54.628288 1043928 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
I0730 03:40:54.628296 1043928 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
I0730 03:40:54.628309 1043928 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
I0730 03:40:54.758434 1043928 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
I0730 03:40:54.758476 1043928 cache.go:194] Successfully downloaded all kic artifacts
I0730 03:40:54.758520 1043928 start.go:360] acquireMachinesLock for old-k8s-version-111858: {Name:mkaad0ecb1cff3c0de68e365879398d019245f4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0730 03:40:54.758588 1043928 start.go:364] duration metric: took 47.491µs to acquireMachinesLock for "old-k8s-version-111858"
I0730 03:40:54.758618 1043928 start.go:96] Skipping create...Using existing machine configuration
I0730 03:40:54.758624 1043928 fix.go:54] fixHost starting:
I0730 03:40:54.758941 1043928 cli_runner.go:164] Run: docker container inspect old-k8s-version-111858 --format={{.State.Status}}
I0730 03:40:54.779018 1043928 fix.go:112] recreateIfNeeded on old-k8s-version-111858: state=Stopped err=<nil>
W0730 03:40:54.779072 1043928 fix.go:138] unexpected machine state, will restart: <nil>
I0730 03:40:54.782943 1043928 out.go:177] * Restarting existing docker container for "old-k8s-version-111858" ...
I0730 03:40:54.785506 1043928 cli_runner.go:164] Run: docker start old-k8s-version-111858
I0730 03:40:55.146815 1043928 cli_runner.go:164] Run: docker container inspect old-k8s-version-111858 --format={{.State.Status}}
I0730 03:40:55.178871 1043928 kic.go:430] container "old-k8s-version-111858" state is running.
I0730 03:40:55.179286 1043928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-111858
I0730 03:40:55.209603 1043928 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/old-k8s-version-111858/config.json ...
I0730 03:40:55.210015 1043928 machine.go:94] provisionDockerMachine start ...
I0730 03:40:55.210133 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:40:55.234435 1043928 main.go:141] libmachine: Using SSH client type: native
I0730 03:40:55.234767 1043928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38754 <nil> <nil>}
I0730 03:40:55.234777 1043928 main.go:141] libmachine: About to run SSH command:
hostname
I0730 03:40:55.235453 1043928 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0730 03:40:58.369170 1043928 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-111858
I0730 03:40:58.369197 1043928 ubuntu.go:169] provisioning hostname "old-k8s-version-111858"
I0730 03:40:58.369260 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:40:58.388503 1043928 main.go:141] libmachine: Using SSH client type: native
I0730 03:40:58.388773 1043928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38754 <nil> <nil>}
I0730 03:40:58.388793 1043928 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-111858 && echo "old-k8s-version-111858" | sudo tee /etc/hostname
I0730 03:40:58.536400 1043928 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-111858
I0730 03:40:58.536514 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:40:58.553972 1043928 main.go:141] libmachine: Using SSH client type: native
I0730 03:40:58.554232 1043928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38754 <nil> <nil>}
I0730 03:40:58.554255 1043928 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-111858' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-111858/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-111858' | sudo tee -a /etc/hosts;
fi
fi
I0730 03:40:58.685972 1043928 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0730 03:40:58.686002 1043928 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19347-652786/.minikube CaCertPath:/home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19347-652786/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19347-652786/.minikube}
I0730 03:40:58.686027 1043928 ubuntu.go:177] setting up certificates
I0730 03:40:58.686042 1043928 provision.go:84] configureAuth start
I0730 03:40:58.686115 1043928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-111858
I0730 03:40:58.703755 1043928 provision.go:143] copyHostCerts
I0730 03:40:58.703841 1043928 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-652786/.minikube/ca.pem, removing ...
I0730 03:40:58.703855 1043928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-652786/.minikube/ca.pem
I0730 03:40:58.703960 1043928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19347-652786/.minikube/ca.pem (1082 bytes)
I0730 03:40:58.704088 1043928 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-652786/.minikube/cert.pem, removing ...
I0730 03:40:58.704098 1043928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-652786/.minikube/cert.pem
I0730 03:40:58.704132 1043928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19347-652786/.minikube/cert.pem (1123 bytes)
I0730 03:40:58.704201 1043928 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-652786/.minikube/key.pem, removing ...
I0730 03:40:58.704215 1043928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-652786/.minikube/key.pem
I0730 03:40:58.704241 1043928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19347-652786/.minikube/key.pem (1679 bytes)
I0730 03:40:58.704298 1043928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19347-652786/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-111858 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-111858]
I0730 03:40:59.734840 1043928 provision.go:177] copyRemoteCerts
I0730 03:40:59.734918 1043928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0730 03:40:59.734964 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:40:59.753299 1043928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38754 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/old-k8s-version-111858/id_rsa Username:docker}
I0730 03:40:59.851287 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0730 03:40:59.880461 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0730 03:40:59.906669 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0730 03:40:59.932997 1043928 provision.go:87] duration metric: took 1.246912073s to configureAuth
I0730 03:40:59.933028 1043928 ubuntu.go:193] setting minikube options for container-runtime
I0730 03:40:59.933251 1043928 config.go:182] Loaded profile config "old-k8s-version-111858": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0730 03:40:59.933323 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:40:59.950800 1043928 main.go:141] libmachine: Using SSH client type: native
I0730 03:40:59.951047 1043928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38754 <nil> <nil>}
I0730 03:40:59.951064 1043928 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0730 03:41:00.259598 1043928 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0730 03:41:00.259697 1043928 ubuntu.go:71] root file system type: overlay
I0730 03:41:00.260654 1043928 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0730 03:41:00.260857 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:41:00.341077 1043928 main.go:141] libmachine: Using SSH client type: native
I0730 03:41:00.341353 1043928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38754 <nil> <nil>}
I0730 03:41:00.341452 1043928 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0730 03:41:00.521453 1043928 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0730 03:41:00.521640 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:41:00.541141 1043928 main.go:141] libmachine: Using SSH client type: native
I0730 03:41:00.541393 1043928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38754 <nil> <nil>}
I0730 03:41:00.541411 1043928 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0730 03:41:00.684028 1043928 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0730 03:41:00.684054 1043928 machine.go:97] duration metric: took 5.474026026s to provisionDockerMachine
I0730 03:41:00.684066 1043928 start.go:293] postStartSetup for "old-k8s-version-111858" (driver="docker")
I0730 03:41:00.684077 1043928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0730 03:41:00.684146 1043928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0730 03:41:00.684207 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:41:00.702737 1043928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38754 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/old-k8s-version-111858/id_rsa Username:docker}
I0730 03:41:00.800232 1043928 ssh_runner.go:195] Run: cat /etc/os-release
I0730 03:41:00.803819 1043928 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0730 03:41:00.803873 1043928 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0730 03:41:00.803892 1043928 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0730 03:41:00.803900 1043928 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0730 03:41:00.803911 1043928 filesync.go:126] Scanning /home/jenkins/minikube-integration/19347-652786/.minikube/addons for local assets ...
I0730 03:41:00.803982 1043928 filesync.go:126] Scanning /home/jenkins/minikube-integration/19347-652786/.minikube/files for local assets ...
I0730 03:41:00.804085 1043928 filesync.go:149] local asset: /home/jenkins/minikube-integration/19347-652786/.minikube/files/etc/ssl/certs/6581782.pem -> 6581782.pem in /etc/ssl/certs
I0730 03:41:00.804196 1043928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0730 03:41:00.813872 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/files/etc/ssl/certs/6581782.pem --> /etc/ssl/certs/6581782.pem (1708 bytes)
I0730 03:41:00.840722 1043928 start.go:296] duration metric: took 156.639423ms for postStartSetup
I0730 03:41:00.840808 1043928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0730 03:41:00.840851 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:41:00.858608 1043928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38754 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/old-k8s-version-111858/id_rsa Username:docker}
I0730 03:41:00.950600 1043928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0730 03:41:00.955236 1043928 fix.go:56] duration metric: took 6.196603115s for fixHost
I0730 03:41:00.955303 1043928 start.go:83] releasing machines lock for "old-k8s-version-111858", held for 6.196697227s
I0730 03:41:00.955381 1043928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-111858
I0730 03:41:00.973164 1043928 ssh_runner.go:195] Run: cat /version.json
I0730 03:41:00.973180 1043928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0730 03:41:00.973221 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:41:00.973239 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:41:00.995301 1043928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38754 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/old-k8s-version-111858/id_rsa Username:docker}
I0730 03:41:01.009851 1043928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38754 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/old-k8s-version-111858/id_rsa Username:docker}
I0730 03:41:01.097902 1043928 ssh_runner.go:195] Run: systemctl --version
I0730 03:41:01.237689 1043928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0730 03:41:01.242442 1043928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0730 03:41:01.263495 1043928 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0730 03:41:01.263607 1043928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0730 03:41:01.284086 1043928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0730 03:41:01.304233 1043928 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0730 03:41:01.304322 1043928 start.go:495] detecting cgroup driver to use...
I0730 03:41:01.304365 1043928 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0730 03:41:01.304510 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0730 03:41:01.327185 1043928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0730 03:41:01.338510 1043928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0730 03:41:01.349534 1043928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0730 03:41:01.349692 1043928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0730 03:41:01.360375 1043928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0730 03:41:01.371679 1043928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0730 03:41:01.385273 1043928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0730 03:41:01.395988 1043928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0730 03:41:01.406480 1043928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0730 03:41:01.417744 1043928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0730 03:41:01.426948 1043928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0730 03:41:01.435905 1043928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0730 03:41:01.527823 1043928 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0730 03:41:01.648066 1043928 start.go:495] detecting cgroup driver to use...
I0730 03:41:01.648155 1043928 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0730 03:41:01.648238 1043928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0730 03:41:01.667484 1043928 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0730 03:41:01.667598 1043928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0730 03:41:01.689210 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0730 03:41:01.710942 1043928 ssh_runner.go:195] Run: which cri-dockerd
I0730 03:41:01.715324 1043928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0730 03:41:01.727355 1043928 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0730 03:41:01.754328 1043928 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0730 03:41:01.870043 1043928 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0730 03:41:01.978349 1043928 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0730 03:41:01.978527 1043928 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0730 03:41:02.002214 1043928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0730 03:41:02.119682 1043928 ssh_runner.go:195] Run: sudo systemctl restart docker
I0730 03:41:02.624835 1043928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0730 03:41:02.649619 1043928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0730 03:41:02.677468 1043928 out.go:204] * Preparing Kubernetes v1.20.0 on Docker 27.1.1 ...
I0730 03:41:02.677606 1043928 cli_runner.go:164] Run: docker network inspect old-k8s-version-111858 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0730 03:41:02.692280 1043928 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0730 03:41:02.696454 1043928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0730 03:41:02.708570 1043928 kubeadm.go:883] updating cluster {Name:old-k8s-version-111858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-111858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0730 03:41:02.708720 1043928 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0730 03:41:02.708788 1043928 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0730 03:41:02.727969 1043928 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.0
registry.k8s.io/kube-proxy:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
registry.k8s.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
registry.k8s.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
registry.k8s.io/kube-scheduler:v1.20.0
registry.k8s.io/etcd:3.4.13-0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
registry.k8s.io/coredns:1.7.0
k8s.gcr.io/pause:3.2
registry.k8s.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0730 03:41:02.727990 1043928 docker.go:615] Images already preloaded, skipping extraction
I0730 03:41:02.728057 1043928 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0730 03:41:02.759371 1043928 docker.go:685] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.0
registry.k8s.io/kube-proxy:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
registry.k8s.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
registry.k8s.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
registry.k8s.io/kube-scheduler:v1.20.0
k8s.gcr.io/etcd:3.4.13-0
registry.k8s.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
registry.k8s.io/coredns:1.7.0
registry.k8s.io/pause:3.2
k8s.gcr.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0730 03:41:02.759395 1043928 cache_images.go:84] Images are preloaded, skipping loading
I0730 03:41:02.759405 1043928 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 docker true true} ...
I0730 03:41:02.759529 1043928 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-111858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-111858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0730 03:41:02.759596 1043928 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0730 03:41:02.816807 1043928 cni.go:84] Creating CNI manager for ""
I0730 03:41:02.816886 1043928 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0730 03:41:02.816909 1043928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0730 03:41:02.816960 1043928 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-111858 NodeName:old-k8s-version-111858 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0730 03:41:02.817143 1043928 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "old-k8s-version-111858"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
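The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, separated by `---`). A minimal stdlib-only Python sketch that splits such a stream on document separators and pulls out each document's `kind:`; the sample string is a trimmed stand-in for the full config, not the config itself:

```python
# Split a multi-document YAML stream on "---" separators and report the
# top-level "kind" of each document, without a YAML library.
# SAMPLE is a trimmed stand-in for the kubeadm config shown in the log.

SAMPLE = """apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def doc_kinds(stream: str) -> list[str]:
    kinds = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
    return kinds

print(doc_kinds(SAMPLE))
```

A real parser would use a YAML library's multi-document loader; the split-on-`---` approach only works because kubeadm emits separators on their own line.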
I0730 03:41:02.817256 1043928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0730 03:41:02.827071 1043928 binaries.go:44] Found k8s binaries, skipping transfer
I0730 03:41:02.827161 1043928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0730 03:41:02.836336 1043928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
I0730 03:41:02.856010 1043928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0730 03:41:02.876021 1043928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
I0730 03:41:02.894909 1043928 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0730 03:41:02.899024 1043928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
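The bash one-liner above makes the `control-plane.minikube.internal` entry idempotent: strip any existing line for that host, then append the fresh mapping. A string-level Python sketch of the same logic (operating on text rather than `/etc/hosts`, so it needs no root; the space-separator case is an extra allowance beyond the log's tab-only match):

```python
# Idempotent host-entry update, mirroring minikube's
# "{ grep -v <host>$ /etc/hosts; echo <ip> <host>; } > tmp; cp tmp /etc/hosts".

def set_host_entry(hosts: str, ip: str, name: str) -> str:
    # Drop any line already mapping this hostname (tab- or space-separated).
    kept = [l for l in hosts.splitlines()
            if not (l.endswith("\t" + name) or l.endswith(" " + name))]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.85.3\tcontrol-plane.minikube.internal\n"
after = set_host_entry(before, "192.168.85.2", "control-plane.minikube.internal")
print(after)
```

Running it twice with the same arguments yields the same text, which is the property the grep-then-append shell pipeline is buying.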
I0730 03:41:02.912397 1043928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0730 03:41:03.009974 1043928 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0730 03:41:03.029401 1043928 certs.go:68] Setting up /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/old-k8s-version-111858 for IP: 192.168.85.2
I0730 03:41:03.029491 1043928 certs.go:194] generating shared ca certs ...
I0730 03:41:03.029522 1043928 certs.go:226] acquiring lock for ca certs: {Name:mkd5662d6a9243b34d7b6c08a80c493f8c01d7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0730 03:41:03.029736 1043928 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19347-652786/.minikube/ca.key
I0730 03:41:03.029882 1043928 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19347-652786/.minikube/proxy-client-ca.key
I0730 03:41:03.029911 1043928 certs.go:256] generating profile certs ...
I0730 03:41:03.030055 1043928 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/old-k8s-version-111858/client.key
I0730 03:41:03.030243 1043928 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/old-k8s-version-111858/apiserver.key.63a1e1a0
I0730 03:41:03.030520 1043928 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/old-k8s-version-111858/proxy-client.key
I0730 03:41:03.030700 1043928 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/658178.pem (1338 bytes)
W0730 03:41:03.030757 1043928 certs.go:480] ignoring /home/jenkins/minikube-integration/19347-652786/.minikube/certs/658178_empty.pem, impossibly tiny 0 bytes
I0730 03:41:03.030782 1043928 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca-key.pem (1675 bytes)
I0730 03:41:03.030845 1043928 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca.pem (1082 bytes)
I0730 03:41:03.030896 1043928 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/cert.pem (1123 bytes)
I0730 03:41:03.030953 1043928 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/key.pem (1679 bytes)
I0730 03:41:03.031030 1043928 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/files/etc/ssl/certs/6581782.pem (1708 bytes)
I0730 03:41:03.031722 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0730 03:41:03.063338 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0730 03:41:03.091978 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0730 03:41:03.119343 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0730 03:41:03.146142 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/old-k8s-version-111858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0730 03:41:03.172895 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/old-k8s-version-111858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0730 03:41:03.211853 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/old-k8s-version-111858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0730 03:41:03.245819 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/old-k8s-version-111858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0730 03:41:03.289499 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/files/etc/ssl/certs/6581782.pem --> /usr/share/ca-certificates/6581782.pem (1708 bytes)
I0730 03:41:03.325795 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0730 03:41:03.359935 1043928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/certs/658178.pem --> /usr/share/ca-certificates/658178.pem (1338 bytes)
I0730 03:41:03.388189 1043928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0730 03:41:03.407624 1043928 ssh_runner.go:195] Run: openssl version
I0730 03:41:03.413391 1043928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6581782.pem && ln -fs /usr/share/ca-certificates/6581782.pem /etc/ssl/certs/6581782.pem"
I0730 03:41:03.423974 1043928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6581782.pem
I0730 03:41:03.427857 1043928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 02:43 /usr/share/ca-certificates/6581782.pem
I0730 03:41:03.427928 1043928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6581782.pem
I0730 03:41:03.435227 1043928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6581782.pem /etc/ssl/certs/3ec20f2e.0"
I0730 03:41:03.444684 1043928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0730 03:41:03.454961 1043928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0730 03:41:03.459047 1043928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 02:36 /usr/share/ca-certificates/minikubeCA.pem
I0730 03:41:03.459129 1043928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0730 03:41:03.466353 1043928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0730 03:41:03.475785 1043928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/658178.pem && ln -fs /usr/share/ca-certificates/658178.pem /etc/ssl/certs/658178.pem"
I0730 03:41:03.485523 1043928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/658178.pem
I0730 03:41:03.489384 1043928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 02:43 /usr/share/ca-certificates/658178.pem
I0730 03:41:03.489451 1043928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/658178.pem
I0730 03:41:03.497220 1043928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/658178.pem /etc/ssl/certs/51391683.0"
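Each cert install above is a pair: `openssl x509 -hash -noout` computes the subject hash, then `test -L <hash>.0 || ln -fs <cert> <hash>.0` creates the OpenSSL-style trust-directory symlink only if absent. A sketch of that symlink step (the hash is taken as an input here, since in the log it comes from the preceding openssl run; it is not recomputed):

```python
# Mirror "test -L /etc/ssl/certs/<hash>.0 || ln -fs <cert> <hash>.0":
# OpenSSL trust directories look up CA certs by <subject-hash>.0 symlinks.

import os
import tempfile

def link_cert(certs_dir: str, cert_path: str, subject_hash: str) -> str:
    link = os.path.join(certs_dir, subject_hash + ".0")
    if not os.path.islink(link):        # test -L
        if os.path.lexists(link):
            os.remove(link)             # ln -f semantics: replace non-link
        os.symlink(cert_path, link)     # ln -s
    return link

d = tempfile.mkdtemp()
cert = os.path.join(d, "minikubeCA.pem")
with open(cert, "w") as f:
    f.write("dummy cert\n")
link = link_cert(d, cert, "b5213941")
print(os.readlink(link))
```

The second call for the same hash is a no-op, matching the `test -L ||` guard in the log.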
I0730 03:41:03.506563 1043928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0730 03:41:03.510432 1043928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0730 03:41:03.519260 1043928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0730 03:41:03.527073 1043928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0730 03:41:03.534121 1043928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0730 03:41:03.541392 1043928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0730 03:41:03.548366 1043928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
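The six `openssl x509 -checkend 86400` runs above each ask: will this cert still be valid 86400 seconds (one day) from now? Exit status 0 means yes, non-zero means it expires inside the window, which would trigger regeneration. The same check expressed with datetimes:

```python
# openssl's -checkend semantics: succeed iff notAfter is later than
# now + <seconds>.

from datetime import datetime, timedelta, timezone

def checkend(not_after: datetime, seconds: int, now: datetime) -> bool:
    """True if the cert is still valid `seconds` from `now` (openssl exit 0)."""
    return not_after > now + timedelta(seconds=seconds)

now = datetime(2024, 7, 30, 3, 41, tzinfo=timezone.utc)
print(checkend(now + timedelta(days=365), 86400, now))   # ample margin
print(checkend(now + timedelta(hours=12), 86400, now))   # expires within a day
```

The first call prints True, the second False.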
I0730 03:41:03.555542 1043928 kubeadm.go:392] StartCluster: {Name:old-k8s-version-111858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-111858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0730 03:41:03.555708 1043928 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0730 03:41:03.573096 1043928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0730 03:41:03.582451 1043928 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0730 03:41:03.582514 1043928 kubeadm.go:593] restartPrimaryControlPlane start ...
I0730 03:41:03.582572 1043928 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0730 03:41:03.591510 1043928 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0730 03:41:03.592383 1043928 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-111858" does not appear in /home/jenkins/minikube-integration/19347-652786/kubeconfig
I0730 03:41:03.592880 1043928 kubeconfig.go:62] /home/jenkins/minikube-integration/19347-652786/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-111858" cluster setting kubeconfig missing "old-k8s-version-111858" context setting]
I0730 03:41:03.593676 1043928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-652786/kubeconfig: {Name:mk305a6aba596aa7115323de6e57c59ca62a0dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0730 03:41:03.595325 1043928 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0730 03:41:03.604276 1043928 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I0730 03:41:03.604360 1043928 kubeadm.go:597] duration metric: took 21.835221ms to restartPrimaryControlPlane
I0730 03:41:03.604380 1043928 kubeadm.go:394] duration metric: took 48.845341ms to StartCluster
I0730 03:41:03.604398 1043928 settings.go:142] acquiring lock: {Name:mk99a6c16e82d0c6a2db8fb43f237845d315971d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0730 03:41:03.604467 1043928 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19347-652786/kubeconfig
I0730 03:41:03.606000 1043928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-652786/kubeconfig: {Name:mk305a6aba596aa7115323de6e57c59ca62a0dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0730 03:41:03.606234 1043928 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0730 03:41:03.606536 1043928 config.go:182] Loaded profile config "old-k8s-version-111858": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0730 03:41:03.606576 1043928 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
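The `toEnable` map above drives the addon setup that follows: only the `true` entries (dashboard, default-storageclass, metrics-server, storage-provisioner) get a "Setting addon ... =true" pass. A trivial sketch of that filtering, with a trimmed stand-in for the full map:

```python
# Filter an addon->enabled map down to the addons that will be installed,
# as the enable-addons step in the log does. The map here is abbreviated.

to_enable = {
    "dashboard": True,
    "default-storageclass": True,
    "metrics-server": True,
    "storage-provisioner": True,
    "ingress": False,
    "registry": False,
}

enabled = sorted(name for name, on in to_enable.items() if on)
print(enabled)
```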
I0730 03:41:03.606648 1043928 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-111858"
I0730 03:41:03.606670 1043928 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-111858"
W0730 03:41:03.606676 1043928 addons.go:243] addon storage-provisioner should already be in state true
I0730 03:41:03.606702 1043928 host.go:66] Checking if "old-k8s-version-111858" exists ...
I0730 03:41:03.606727 1043928 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-111858"
I0730 03:41:03.606787 1043928 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-111858"
I0730 03:41:03.607138 1043928 cli_runner.go:164] Run: docker container inspect old-k8s-version-111858 --format={{.State.Status}}
I0730 03:41:03.607365 1043928 cli_runner.go:164] Run: docker container inspect old-k8s-version-111858 --format={{.State.Status}}
I0730 03:41:03.607679 1043928 addons.go:69] Setting dashboard=true in profile "old-k8s-version-111858"
I0730 03:41:03.607726 1043928 addons.go:234] Setting addon dashboard=true in "old-k8s-version-111858"
W0730 03:41:03.607738 1043928 addons.go:243] addon dashboard should already be in state true
I0730 03:41:03.607761 1043928 host.go:66] Checking if "old-k8s-version-111858" exists ...
I0730 03:41:03.608195 1043928 cli_runner.go:164] Run: docker container inspect old-k8s-version-111858 --format={{.State.Status}}
I0730 03:41:03.610553 1043928 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-111858"
I0730 03:41:03.610596 1043928 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-111858"
W0730 03:41:03.610604 1043928 addons.go:243] addon metrics-server should already be in state true
I0730 03:41:03.610633 1043928 host.go:66] Checking if "old-k8s-version-111858" exists ...
I0730 03:41:03.610919 1043928 out.go:177] * Verifying Kubernetes components...
I0730 03:41:03.612684 1043928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0730 03:41:03.613708 1043928 cli_runner.go:164] Run: docker container inspect old-k8s-version-111858 --format={{.State.Status}}
I0730 03:41:03.651059 1043928 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-111858"
W0730 03:41:03.651083 1043928 addons.go:243] addon default-storageclass should already be in state true
I0730 03:41:03.651109 1043928 host.go:66] Checking if "old-k8s-version-111858" exists ...
I0730 03:41:03.651552 1043928 cli_runner.go:164] Run: docker container inspect old-k8s-version-111858 --format={{.State.Status}}
I0730 03:41:03.665646 1043928 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0730 03:41:03.668149 1043928 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0730 03:41:03.668174 1043928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0730 03:41:03.668241 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:41:03.687267 1043928 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0730 03:41:03.687541 1043928 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0730 03:41:03.689514 1043928 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0730 03:41:03.689631 1043928 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0730 03:41:03.689643 1043928 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0730 03:41:03.689708 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:41:03.691893 1043928 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0730 03:41:03.691912 1043928 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0730 03:41:03.691983 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:41:03.727191 1043928 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0730 03:41:03.727212 1043928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0730 03:41:03.727289 1043928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-111858
I0730 03:41:03.733841 1043928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38754 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/old-k8s-version-111858/id_rsa Username:docker}
I0730 03:41:03.745759 1043928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38754 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/old-k8s-version-111858/id_rsa Username:docker}
I0730 03:41:03.794788 1043928 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0730 03:41:03.806040 1043928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38754 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/old-k8s-version-111858/id_rsa Username:docker}
I0730 03:41:03.815293 1043928 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-111858" to be "Ready" ...
I0730 03:41:03.815737 1043928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38754 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/old-k8s-version-111858/id_rsa Username:docker}
I0730 03:41:03.878778 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0730 03:41:03.887955 1043928 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0730 03:41:03.888039 1043928 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0730 03:41:03.920405 1043928 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0730 03:41:03.920436 1043928 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0730 03:41:03.952111 1043928 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0730 03:41:03.952150 1043928 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0730 03:41:03.983260 1043928 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0730 03:41:03.983285 1043928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0730 03:41:03.987221 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0730 03:41:04.006911 1043928 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0730 03:41:04.006940 1043928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0730 03:41:04.048216 1043928 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0730 03:41:04.048244 1043928 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
W0730 03:41:04.052448 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.052486 1043928 retry.go:31] will retry after 333.710544ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.057706 1043928 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0730 03:41:04.057734 1043928 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0730 03:41:04.081330 1043928 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0730 03:41:04.081358 1043928 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0730 03:41:04.128324 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0730 03:41:04.133122 1043928 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0730 03:41:04.133199 1043928 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
W0730 03:41:04.164932 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.164967 1043928 retry.go:31] will retry after 328.420424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.171074 1043928 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0730 03:41:04.171151 1043928 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0730 03:41:04.207053 1043928 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0730 03:41:04.207077 1043928 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0730 03:41:04.239468 1043928 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0730 03:41:04.239493 1043928 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
W0730 03:41:04.246297 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.246331 1043928 retry.go:31] will retry after 369.388219ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.260302 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0730 03:41:04.334375 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.334410 1043928 retry.go:31] will retry after 344.912712ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.386638 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0730 03:41:04.455424 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.455457 1043928 retry.go:31] will retry after 217.153856ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.493752 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0730 03:41:04.570622 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.570655 1043928 retry.go:31] will retry after 469.70824ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.616881 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0730 03:41:04.673297 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0730 03:41:04.679689 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0730 03:41:04.692327 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.692362 1043928 retry.go:31] will retry after 296.925737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0730 03:41:04.788501 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.788533 1043928 retry.go:31] will retry after 718.052017ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0730 03:41:04.790029 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.790062 1043928 retry.go:31] will retry after 525.288514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:04.989901 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0730 03:41:05.041338 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0730 03:41:05.081821 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:05.081891 1043928 retry.go:31] will retry after 544.518168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0730 03:41:05.131501 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:05.131534 1043928 retry.go:31] will retry after 739.187466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:05.316181 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0730 03:41:05.422524 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:05.422572 1043928 retry.go:31] will retry after 314.691425ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:05.507768 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0730 03:41:05.627137 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0730 03:41:05.654068 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:05.654102 1043928 retry.go:31] will retry after 1.060765138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:05.738247 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0730 03:41:05.806274 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:05.806371 1043928 retry.go:31] will retry after 1.176326896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:05.815936 1043928 node_ready.go:53] error getting node "old-k8s-version-111858": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-111858": dial tcp 192.168.85.2:8443: connect: connection refused
I0730 03:41:05.871179 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0730 03:41:05.909163 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:05.909242 1043928 retry.go:31] will retry after 780.859734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0730 03:41:06.011155 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:06.011240 1043928 retry.go:31] will retry after 1.098930829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:06.690327 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0730 03:41:06.715810 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0730 03:41:06.946166 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:06.946260 1043928 retry.go:31] will retry after 1.293774728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0730 03:41:06.958940 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:06.958973 1043928 retry.go:31] will retry after 889.682206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:06.983258 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0730 03:41:07.110989 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0730 03:41:07.134353 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:07.134451 1043928 retry.go:31] will retry after 1.609583313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0730 03:41:07.297066 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:07.297139 1043928 retry.go:31] will retry after 1.52634941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:07.816185 1043928 node_ready.go:53] error getting node "old-k8s-version-111858": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-111858": dial tcp 192.168.85.2:8443: connect: connection refused
I0730 03:41:07.849496 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0730 03:41:08.093681 1043928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:08.093723 1043928 retry.go:31] will retry after 1.548139941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0730 03:41:08.241121 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0730 03:41:08.744211 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0730 03:41:08.824519 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0730 03:41:09.642759 1043928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0730 03:41:18.277282 1043928 node_ready.go:49] node "old-k8s-version-111858" has status "Ready":"True"
I0730 03:41:18.277312 1043928 node_ready.go:38] duration metric: took 14.461981495s for node "old-k8s-version-111858" to be "Ready" ...
I0730 03:41:18.277323 1043928 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0730 03:41:18.493686 1043928 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-t6jx8" in "kube-system" namespace to be "Ready" ...
I0730 03:41:18.594671 1043928 pod_ready.go:92] pod "coredns-74ff55c5b-t6jx8" in "kube-system" namespace has status "Ready":"True"
I0730 03:41:18.594697 1043928 pod_ready.go:81] duration metric: took 100.971623ms for pod "coredns-74ff55c5b-t6jx8" in "kube-system" namespace to be "Ready" ...
I0730 03:41:18.594710 1043928 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-111858" in "kube-system" namespace to be "Ready" ...
I0730 03:41:18.686572 1043928 pod_ready.go:92] pod "etcd-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"True"
I0730 03:41:18.686600 1043928 pod_ready.go:81] duration metric: took 91.866548ms for pod "etcd-old-k8s-version-111858" in "kube-system" namespace to be "Ready" ...
I0730 03:41:18.686613 1043928 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-111858" in "kube-system" namespace to be "Ready" ...
I0730 03:41:18.738799 1043928 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"True"
I0730 03:41:18.738819 1043928 pod_ready.go:81] duration metric: took 52.198946ms for pod "kube-apiserver-old-k8s-version-111858" in "kube-system" namespace to be "Ready" ...
I0730 03:41:18.738841 1043928 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace to be "Ready" ...
I0730 03:41:20.721065 1043928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.479892457s)
I0730 03:41:20.721308 1043928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.977061355s)
I0730 03:41:20.721330 1043928 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-111858"
I0730 03:41:20.721373 1043928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.896817295s)
I0730 03:41:20.721689 1043928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.0789028s)
I0730 03:41:20.723344 1043928 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-111858 addons enable metrics-server
I0730 03:41:20.730000 1043928 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
I0730 03:41:20.733046 1043928 addons.go:510] duration metric: took 17.126458148s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
I0730 03:41:20.746294 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:23.245803 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:25.246609 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:27.247256 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:29.745433 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:32.246260 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:34.744870 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:36.857289 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:39.247868 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:41.746354 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:43.746910 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:46.245461 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:48.295322 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:50.745715 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:53.245522 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:55.248836 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:41:57.746735 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:00.251054 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:02.745449 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:04.745734 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:07.245623 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:09.745490 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:11.745895 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:14.245287 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:16.246359 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:18.745933 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:20.746857 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:23.246802 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:25.746160 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:27.746785 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:30.245722 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:32.744433 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:34.745192 1043928 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:35.249981 1043928 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"True"
I0730 03:42:35.250007 1043928 pod_ready.go:81] duration metric: took 1m16.511156357s for pod "kube-controller-manager-old-k8s-version-111858" in "kube-system" namespace to be "Ready" ...
I0730 03:42:35.250020 1043928 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6kqkd" in "kube-system" namespace to be "Ready" ...
I0730 03:42:35.258656 1043928 pod_ready.go:92] pod "kube-proxy-6kqkd" in "kube-system" namespace has status "Ready":"True"
I0730 03:42:35.258684 1043928 pod_ready.go:81] duration metric: took 8.655733ms for pod "kube-proxy-6kqkd" in "kube-system" namespace to be "Ready" ...
I0730 03:42:35.258697 1043928 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-111858" in "kube-system" namespace to be "Ready" ...
I0730 03:42:37.265726 1043928 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:39.765218 1043928 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:42.265471 1043928 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:44.264425 1043928 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-111858" in "kube-system" namespace has status "Ready":"True"
I0730 03:42:44.264450 1043928 pod_ready.go:81] duration metric: took 9.005744036s for pod "kube-scheduler-old-k8s-version-111858" in "kube-system" namespace to be "Ready" ...
I0730 03:42:44.264461 1043928 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace to be "Ready" ...
I0730 03:42:46.271134 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:48.277135 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:50.771835 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:52.772468 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:55.270547 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:57.271094 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:42:59.771183 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:02.270961 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:04.770349 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:06.780082 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:09.270701 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:11.271342 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:13.770959 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:15.771347 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:18.274605 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:20.770783 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:22.771452 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:25.271086 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:27.271839 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:29.771402 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:32.271475 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:34.770994 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:37.269974 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:39.270763 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:41.771290 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:44.271094 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:46.770047 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:48.771342 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:51.271055 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:53.770567 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:55.771238 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:43:58.271370 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:00.319079 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:02.770510 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:04.771563 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:07.271184 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:09.271221 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:11.771418 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:13.778507 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:16.276843 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:18.770880 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:21.270686 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:23.770981 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:26.271827 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:28.771541 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:30.771601 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:32.772233 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:35.271227 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:37.271488 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:39.809746 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:42.272773 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:44.771123 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:46.771293 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:49.274000 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:51.771481 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:53.772426 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:56.271029 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:44:58.771577 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:00.773072 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:03.271325 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:05.271950 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:07.770678 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:09.770978 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:11.771629 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:14.270140 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:16.270830 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:18.770675 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:20.774679 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:23.284295 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:25.284927 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:27.771030 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:29.771965 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:32.270973 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:34.271387 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:36.271507 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:38.271855 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:40.441635 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:42.773194 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:45.278381 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:47.771690 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:50.274883 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:52.770260 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:54.775347 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:57.271472 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:45:59.271796 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:01.272147 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:03.771419 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:06.270794 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:08.270882 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:10.770563 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:12.771023 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:15.271504 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:17.770247 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:19.770926 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:21.772013 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:23.772058 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:26.271239 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:28.770946 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:30.771152 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:32.771223 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:34.772803 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:37.270724 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:39.275136 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:41.771485 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:44.286631 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:44.286659 1043928 pod_ready.go:81] duration metric: took 4m0.022189423s for pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace to be "Ready" ...
E0730 03:46:44.286669 1043928 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0730 03:46:44.286677 1043928 pod_ready.go:38] duration metric: took 5m26.009343714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0730 03:46:44.286695 1043928 api_server.go:52] waiting for apiserver process to appear ...
I0730 03:46:44.286767 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0730 03:46:44.316796 1043928 logs.go:276] 2 containers: [d68a1084ef5a d1dbee0d1be9]
I0730 03:46:44.316916 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0730 03:46:44.346172 1043928 logs.go:276] 2 containers: [ec0cdba2249f 7015c3abc9b9]
I0730 03:46:44.346258 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0730 03:46:44.382340 1043928 logs.go:276] 2 containers: [2cd9c807cd34 b8b5a5f5b2cd]
I0730 03:46:44.382416 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0730 03:46:44.408911 1043928 logs.go:276] 2 containers: [ad56e41faf1a 15a7c60d1f7b]
I0730 03:46:44.408986 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0730 03:46:44.443353 1043928 logs.go:276] 2 containers: [5d46f562a227 0d145599f470]
I0730 03:46:44.443431 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0730 03:46:44.494693 1043928 logs.go:276] 2 containers: [27b9c281f645 81dd56fb259d]
I0730 03:46:44.494837 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0730 03:46:44.523644 1043928 logs.go:276] 0 containers: []
W0730 03:46:44.528436 1043928 logs.go:278] No container was found matching "kindnet"
I0730 03:46:44.528527 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0730 03:46:44.563892 1043928 logs.go:276] 1 containers: [99de9be3f2f4]
I0730 03:46:44.564007 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0730 03:46:44.594823 1043928 logs.go:276] 2 containers: [0ab09fb9a8c8 5805a5344565]
I0730 03:46:44.594867 1043928 logs.go:123] Gathering logs for kube-scheduler [15a7c60d1f7b] ...
I0730 03:46:44.594898 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a7c60d1f7b"
I0730 03:46:44.649215 1043928 logs.go:123] Gathering logs for kube-proxy [0d145599f470] ...
I0730 03:46:44.649279 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d145599f470"
I0730 03:46:44.684539 1043928 logs.go:123] Gathering logs for storage-provisioner [5805a5344565] ...
I0730 03:46:44.684620 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5805a5344565"
I0730 03:46:44.718639 1043928 logs.go:123] Gathering logs for dmesg ...
I0730 03:46:44.718714 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0730 03:46:44.749530 1043928 logs.go:123] Gathering logs for kube-apiserver [d1dbee0d1be9] ...
I0730 03:46:44.749719 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dbee0d1be9"
I0730 03:46:44.873389 1043928 logs.go:123] Gathering logs for coredns [b8b5a5f5b2cd] ...
I0730 03:46:44.873472 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b5a5f5b2cd"
I0730 03:46:44.926764 1043928 logs.go:123] Gathering logs for kube-scheduler [ad56e41faf1a] ...
I0730 03:46:44.926844 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad56e41faf1a"
I0730 03:46:44.957477 1043928 logs.go:123] Gathering logs for kubernetes-dashboard [99de9be3f2f4] ...
I0730 03:46:44.957508 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99de9be3f2f4"
I0730 03:46:44.999678 1043928 logs.go:123] Gathering logs for kube-apiserver [d68a1084ef5a] ...
I0730 03:46:44.999709 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68a1084ef5a"
I0730 03:46:45.090365 1043928 logs.go:123] Gathering logs for etcd [7015c3abc9b9] ...
I0730 03:46:45.090424 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7015c3abc9b9"
I0730 03:46:45.159871 1043928 logs.go:123] Gathering logs for coredns [2cd9c807cd34] ...
I0730 03:46:45.159910 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd9c807cd34"
I0730 03:46:45.191655 1043928 logs.go:123] Gathering logs for kube-controller-manager [81dd56fb259d] ...
I0730 03:46:45.191690 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81dd56fb259d"
I0730 03:46:45.295298 1043928 logs.go:123] Gathering logs for kubelet ...
I0730 03:46:45.295337 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0730 03:46:45.392951 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:18 old-k8s-version-111858 kubelet[1359]: E0730 03:41:18.315016 1359 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:45.393191 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:18 old-k8s-version-111858 kubelet[1359]: E0730 03:41:18.315138 1359 reflector.go:138] object-"kube-system"/"kube-proxy-token-jp96h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-jp96h" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:45.400189 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:20 old-k8s-version-111858 kubelet[1359]: E0730 03:41:20.353061 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:45.401009 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:20 old-k8s-version-111858 kubelet[1359]: E0730 03:41:20.398187 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.401532 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:21 old-k8s-version-111858 kubelet[1359]: E0730 03:41:21.431634 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.408332 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:34 old-k8s-version-111858 kubelet[1359]: E0730 03:41:34.817700 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:45.408686 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:36 old-k8s-version-111858 kubelet[1359]: E0730 03:41:36.897125 1359 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-rrwkc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-rrwkc" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:45.413078 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:44 old-k8s-version-111858 kubelet[1359]: E0730 03:41:44.313565 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:45.413461 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:44 old-k8s-version-111858 kubelet[1359]: E0730 03:41:44.681668 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.413655 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:46 old-k8s-version-111858 kubelet[1359]: E0730 03:41:46.797782 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.414313 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:51 old-k8s-version-111858 kubelet[1359]: E0730 03:41:51.772734 1359 pod_workers.go:191] Error syncing pod 61977ceb-fabc-4963-9a9a-a69ce9b13905 ("storage-provisioner_kube-system(61977ceb-fabc-4963-9a9a-a69ce9b13905)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(61977ceb-fabc-4963-9a9a-a69ce9b13905)"
W0730 03:46:45.421683 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:58 old-k8s-version-111858 kubelet[1359]: E0730 03:41:58.482463 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:45.423804 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:58 old-k8s-version-111858 kubelet[1359]: E0730 03:41:58.509101 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:45.424136 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:09 old-k8s-version-111858 kubelet[1359]: E0730 03:42:09.796542 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.424327 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:11 old-k8s-version-111858 kubelet[1359]: E0730 03:42:11.796425 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.426607 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:24 old-k8s-version-111858 kubelet[1359]: E0730 03:42:24.376186 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:45.426796 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:24 old-k8s-version-111858 kubelet[1359]: E0730 03:42:24.804673 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.426982 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:35 old-k8s-version-111858 kubelet[1359]: E0730 03:42:35.796278 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.427179 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:36 old-k8s-version-111858 kubelet[1359]: E0730 03:42:36.797986 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.429273 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:50 old-k8s-version-111858 kubelet[1359]: E0730 03:42:50.833614 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:45.429471 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:51 old-k8s-version-111858 kubelet[1359]: E0730 03:42:51.803823 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.430812 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:02 old-k8s-version-111858 kubelet[1359]: E0730 03:43:02.798080 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.431024 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:04 old-k8s-version-111858 kubelet[1359]: E0730 03:43:04.796877 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.431220 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:15 old-k8s-version-111858 kubelet[1359]: E0730 03:43:15.797776 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.433472 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:16 old-k8s-version-111858 kubelet[1359]: E0730 03:43:16.367003 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:45.433691 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:27 old-k8s-version-111858 kubelet[1359]: E0730 03:43:27.796546 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.433889 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:31 old-k8s-version-111858 kubelet[1359]: E0730 03:43:31.796344 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.434082 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:41 old-k8s-version-111858 kubelet[1359]: E0730 03:43:41.796601 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.434281 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:46 old-k8s-version-111858 kubelet[1359]: E0730 03:43:46.800531 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.434465 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:53 old-k8s-version-111858 kubelet[1359]: E0730 03:43:53.796486 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.434661 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:57 old-k8s-version-111858 kubelet[1359]: E0730 03:43:57.804900 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.434845 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:08 old-k8s-version-111858 kubelet[1359]: E0730 03:44:08.799152 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.435043 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:09 old-k8s-version-111858 kubelet[1359]: E0730 03:44:09.796891 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.439098 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:19 old-k8s-version-111858 kubelet[1359]: E0730 03:44:19.811955 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:45.439329 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:20 old-k8s-version-111858 kubelet[1359]: E0730 03:44:20.796408 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.439533 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:31 old-k8s-version-111858 kubelet[1359]: E0730 03:44:31.796408 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.439718 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:32 old-k8s-version-111858 kubelet[1359]: E0730 03:44:32.804546 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.441957 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:46 old-k8s-version-111858 kubelet[1359]: E0730 03:44:46.480773 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:45.442166 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:47 old-k8s-version-111858 kubelet[1359]: E0730 03:44:47.796662 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.442374 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:57 old-k8s-version-111858 kubelet[1359]: E0730 03:44:57.796558 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.442559 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:01 old-k8s-version-111858 kubelet[1359]: E0730 03:45:01.796960 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.442755 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:08 old-k8s-version-111858 kubelet[1359]: E0730 03:45:08.804919 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.442941 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:12 old-k8s-version-111858 kubelet[1359]: E0730 03:45:12.797499 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.443138 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:23 old-k8s-version-111858 kubelet[1359]: E0730 03:45:23.796294 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.443332 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:23 old-k8s-version-111858 kubelet[1359]: E0730 03:45:23.801724 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.444699 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:35 old-k8s-version-111858 kubelet[1359]: E0730 03:45:35.796294 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.444898 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:35 old-k8s-version-111858 kubelet[1359]: E0730 03:45:35.797085 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.445097 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:46 old-k8s-version-111858 kubelet[1359]: E0730 03:45:46.800946 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.445289 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:50 old-k8s-version-111858 kubelet[1359]: E0730 03:45:50.798282 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.445490 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:01 old-k8s-version-111858 kubelet[1359]: E0730 03:46:01.796604 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.445692 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:02 old-k8s-version-111858 kubelet[1359]: E0730 03:46:02.834688 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.445891 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:14 old-k8s-version-111858 kubelet[1359]: E0730 03:46:14.810612 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.446086 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:17 old-k8s-version-111858 kubelet[1359]: E0730 03:46:17.796583 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.446283 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:26 old-k8s-version-111858 kubelet[1359]: E0730 03:46:26.796960 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.446506 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:32 old-k8s-version-111858 kubelet[1359]: E0730 03:46:32.801040 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.446705 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:39 old-k8s-version-111858 kubelet[1359]: E0730 03:46:39.796520 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.446892 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:43 old-k8s-version-111858 kubelet[1359]: E0730 03:46:43.796586 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0730 03:46:45.446903 1043928 logs.go:123] Gathering logs for etcd [ec0cdba2249f] ...
I0730 03:46:45.446918 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0cdba2249f"
I0730 03:46:45.505695 1043928 logs.go:123] Gathering logs for kube-proxy [5d46f562a227] ...
I0730 03:46:45.505728 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d46f562a227"
I0730 03:46:45.554881 1043928 logs.go:123] Gathering logs for Docker ...
I0730 03:46:45.554967 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0730 03:46:45.591910 1043928 logs.go:123] Gathering logs for container status ...
I0730 03:46:45.595756 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0730 03:46:45.695314 1043928 logs.go:123] Gathering logs for describe nodes ...
I0730 03:46:45.695408 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0730 03:46:45.983814 1043928 logs.go:123] Gathering logs for kube-controller-manager [27b9c281f645] ...
I0730 03:46:45.983886 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b9c281f645"
I0730 03:46:46.074444 1043928 logs.go:123] Gathering logs for storage-provisioner [0ab09fb9a8c8] ...
I0730 03:46:46.074521 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab09fb9a8c8"
I0730 03:46:46.114837 1043928 out.go:304] Setting ErrFile to fd 2...
I0730 03:46:46.114860 1043928 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0730 03:46:46.114910 1043928 out.go:239] X Problems detected in kubelet:
W0730 03:46:46.114918 1043928 out.go:239] Jul 30 03:46:17 old-k8s-version-111858 kubelet[1359]: E0730 03:46:17.796583 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:46.114925 1043928 out.go:239] Jul 30 03:46:26 old-k8s-version-111858 kubelet[1359]: E0730 03:46:26.796960 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:46.114933 1043928 out.go:239] Jul 30 03:46:32 old-k8s-version-111858 kubelet[1359]: E0730 03:46:32.801040 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:46.114941 1043928 out.go:239] Jul 30 03:46:39 old-k8s-version-111858 kubelet[1359]: E0730 03:46:39.796520 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:46.114957 1043928 out.go:239] Jul 30 03:46:43 old-k8s-version-111858 kubelet[1359]: E0730 03:46:43.796586 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0730 03:46:46.114965 1043928 out.go:304] Setting ErrFile to fd 2...
I0730 03:46:46.114971 1043928 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 03:46:56.115321 1043928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0730 03:46:56.132355 1043928 api_server.go:72] duration metric: took 5m52.526084127s to wait for apiserver process to appear ...
I0730 03:46:56.132379 1043928 api_server.go:88] waiting for apiserver healthz status ...
I0730 03:46:56.132462 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0730 03:46:56.167430 1043928 logs.go:276] 2 containers: [d68a1084ef5a d1dbee0d1be9]
I0730 03:46:56.167506 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0730 03:46:56.196491 1043928 logs.go:276] 2 containers: [ec0cdba2249f 7015c3abc9b9]
I0730 03:46:56.196587 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0730 03:46:56.229447 1043928 logs.go:276] 2 containers: [2cd9c807cd34 b8b5a5f5b2cd]
I0730 03:46:56.229534 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0730 03:46:56.252389 1043928 logs.go:276] 2 containers: [ad56e41faf1a 15a7c60d1f7b]
I0730 03:46:56.252470 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0730 03:46:56.294541 1043928 logs.go:276] 2 containers: [5d46f562a227 0d145599f470]
I0730 03:46:56.294616 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0730 03:46:56.339108 1043928 logs.go:276] 2 containers: [27b9c281f645 81dd56fb259d]
I0730 03:46:56.339193 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0730 03:46:56.384112 1043928 logs.go:276] 0 containers: []
W0730 03:46:56.384135 1043928 logs.go:278] No container was found matching "kindnet"
I0730 03:46:56.384195 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0730 03:46:56.425653 1043928 logs.go:276] 2 containers: [0ab09fb9a8c8 5805a5344565]
I0730 03:46:56.425733 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0730 03:46:56.456660 1043928 logs.go:276] 1 containers: [99de9be3f2f4]
I0730 03:46:56.456695 1043928 logs.go:123] Gathering logs for kube-scheduler [ad56e41faf1a] ...
I0730 03:46:56.456707 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad56e41faf1a"
I0730 03:46:56.507485 1043928 logs.go:123] Gathering logs for storage-provisioner [5805a5344565] ...
I0730 03:46:56.507618 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5805a5344565"
I0730 03:46:56.539413 1043928 logs.go:123] Gathering logs for container status ...
I0730 03:46:56.539512 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0730 03:46:56.635429 1043928 logs.go:123] Gathering logs for kube-scheduler [15a7c60d1f7b] ...
I0730 03:46:56.635545 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a7c60d1f7b"
I0730 03:46:56.668779 1043928 logs.go:123] Gathering logs for storage-provisioner [0ab09fb9a8c8] ...
I0730 03:46:56.668811 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab09fb9a8c8"
I0730 03:46:56.695834 1043928 logs.go:123] Gathering logs for Docker ...
I0730 03:46:56.695863 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0730 03:46:56.727593 1043928 logs.go:123] Gathering logs for kubelet ...
I0730 03:46:56.727797 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0730 03:46:56.785122 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:18 old-k8s-version-111858 kubelet[1359]: E0730 03:41:18.315016 1359 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:56.785365 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:18 old-k8s-version-111858 kubelet[1359]: E0730 03:41:18.315138 1359 reflector.go:138] object-"kube-system"/"kube-proxy-token-jp96h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-jp96h" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:56.792338 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:20 old-k8s-version-111858 kubelet[1359]: E0730 03:41:20.353061 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:56.793137 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:20 old-k8s-version-111858 kubelet[1359]: E0730 03:41:20.398187 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.793676 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:21 old-k8s-version-111858 kubelet[1359]: E0730 03:41:21.431634 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.800615 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:34 old-k8s-version-111858 kubelet[1359]: E0730 03:41:34.817700 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:56.800973 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:36 old-k8s-version-111858 kubelet[1359]: E0730 03:41:36.897125 1359 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-rrwkc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-rrwkc" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:56.805333 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:44 old-k8s-version-111858 kubelet[1359]: E0730 03:41:44.313565 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:56.805720 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:44 old-k8s-version-111858 kubelet[1359]: E0730 03:41:44.681668 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.805907 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:46 old-k8s-version-111858 kubelet[1359]: E0730 03:41:46.797782 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.806553 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:51 old-k8s-version-111858 kubelet[1359]: E0730 03:41:51.772734 1359 pod_workers.go:191] Error syncing pod 61977ceb-fabc-4963-9a9a-a69ce9b13905 ("storage-provisioner_kube-system(61977ceb-fabc-4963-9a9a-a69ce9b13905)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(61977ceb-fabc-4963-9a9a-a69ce9b13905)"
W0730 03:46:56.809253 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:58 old-k8s-version-111858 kubelet[1359]: E0730 03:41:58.482463 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:56.811328 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:58 old-k8s-version-111858 kubelet[1359]: E0730 03:41:58.509101 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:56.811659 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:09 old-k8s-version-111858 kubelet[1359]: E0730 03:42:09.796542 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.811845 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:11 old-k8s-version-111858 kubelet[1359]: E0730 03:42:11.796425 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.814240 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:24 old-k8s-version-111858 kubelet[1359]: E0730 03:42:24.376186 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:56.814466 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:24 old-k8s-version-111858 kubelet[1359]: E0730 03:42:24.804673 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.814684 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:35 old-k8s-version-111858 kubelet[1359]: E0730 03:42:35.796278 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.814911 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:36 old-k8s-version-111858 kubelet[1359]: E0730 03:42:36.797986 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.817181 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:50 old-k8s-version-111858 kubelet[1359]: E0730 03:42:50.833614 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:56.817389 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:51 old-k8s-version-111858 kubelet[1359]: E0730 03:42:51.803823 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.817596 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:02 old-k8s-version-111858 kubelet[1359]: E0730 03:43:02.798080 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.817808 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:04 old-k8s-version-111858 kubelet[1359]: E0730 03:43:04.796877 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.817999 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:15 old-k8s-version-111858 kubelet[1359]: E0730 03:43:15.797776 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.820222 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:16 old-k8s-version-111858 kubelet[1359]: E0730 03:43:16.367003 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:56.820407 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:27 old-k8s-version-111858 kubelet[1359]: E0730 03:43:27.796546 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.820602 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:31 old-k8s-version-111858 kubelet[1359]: E0730 03:43:31.796344 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.820788 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:41 old-k8s-version-111858 kubelet[1359]: E0730 03:43:41.796601 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.820985 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:46 old-k8s-version-111858 kubelet[1359]: E0730 03:43:46.800531 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.821170 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:53 old-k8s-version-111858 kubelet[1359]: E0730 03:43:53.796486 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.821367 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:57 old-k8s-version-111858 kubelet[1359]: E0730 03:43:57.804900 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.821552 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:08 old-k8s-version-111858 kubelet[1359]: E0730 03:44:08.799152 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.821754 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:09 old-k8s-version-111858 kubelet[1359]: E0730 03:44:09.796891 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.823824 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:19 old-k8s-version-111858 kubelet[1359]: E0730 03:44:19.811955 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:56.824020 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:20 old-k8s-version-111858 kubelet[1359]: E0730 03:44:20.796408 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.824218 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:31 old-k8s-version-111858 kubelet[1359]: E0730 03:44:31.796408 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.824402 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:32 old-k8s-version-111858 kubelet[1359]: E0730 03:44:32.804546 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.826629 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:46 old-k8s-version-111858 kubelet[1359]: E0730 03:44:46.480773 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:56.826814 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:47 old-k8s-version-111858 kubelet[1359]: E0730 03:44:47.796662 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827009 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:57 old-k8s-version-111858 kubelet[1359]: E0730 03:44:57.796558 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827195 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:01 old-k8s-version-111858 kubelet[1359]: E0730 03:45:01.796960 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827390 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:08 old-k8s-version-111858 kubelet[1359]: E0730 03:45:08.804919 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827591 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:12 old-k8s-version-111858 kubelet[1359]: E0730 03:45:12.797499 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827790 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:23 old-k8s-version-111858 kubelet[1359]: E0730 03:45:23.796294 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827976 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:23 old-k8s-version-111858 kubelet[1359]: E0730 03:45:23.801724 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.828173 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:35 old-k8s-version-111858 kubelet[1359]: E0730 03:45:35.796294 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.828357 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:35 old-k8s-version-111858 kubelet[1359]: E0730 03:45:35.797085 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.828552 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:46 old-k8s-version-111858 kubelet[1359]: E0730 03:45:46.800946 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.828736 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:50 old-k8s-version-111858 kubelet[1359]: E0730 03:45:50.798282 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.828934 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:01 old-k8s-version-111858 kubelet[1359]: E0730 03:46:01.796604 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.829117 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:02 old-k8s-version-111858 kubelet[1359]: E0730 03:46:02.834688 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.829312 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:14 old-k8s-version-111858 kubelet[1359]: E0730 03:46:14.810612 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.829498 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:17 old-k8s-version-111858 kubelet[1359]: E0730 03:46:17.796583 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.829699 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:26 old-k8s-version-111858 kubelet[1359]: E0730 03:46:26.796960 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.829883 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:32 old-k8s-version-111858 kubelet[1359]: E0730 03:46:32.801040 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.830082 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:39 old-k8s-version-111858 kubelet[1359]: E0730 03:46:39.796520 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.830266 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:43 old-k8s-version-111858 kubelet[1359]: E0730 03:46:43.796586 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.830463 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:51 old-k8s-version-111858 kubelet[1359]: E0730 03:46:51.796690 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
I0730 03:46:56.830476 1043928 logs.go:123] Gathering logs for kube-apiserver [d1dbee0d1be9] ...
I0730 03:46:56.830491 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dbee0d1be9"
I0730 03:46:56.910378 1043928 logs.go:123] Gathering logs for coredns [b8b5a5f5b2cd] ...
I0730 03:46:56.910420 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b5a5f5b2cd"
I0730 03:46:56.941772 1043928 logs.go:123] Gathering logs for kube-proxy [5d46f562a227] ...
I0730 03:46:56.941803 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d46f562a227"
I0730 03:46:56.963900 1043928 logs.go:123] Gathering logs for kube-controller-manager [81dd56fb259d] ...
I0730 03:46:56.963931 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81dd56fb259d"
I0730 03:46:57.029152 1043928 logs.go:123] Gathering logs for kube-apiserver [d68a1084ef5a] ...
I0730 03:46:57.029275 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68a1084ef5a"
I0730 03:46:57.089335 1043928 logs.go:123] Gathering logs for etcd [7015c3abc9b9] ...
I0730 03:46:57.089375 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7015c3abc9b9"
I0730 03:46:57.123687 1043928 logs.go:123] Gathering logs for coredns [2cd9c807cd34] ...
I0730 03:46:57.123721 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd9c807cd34"
I0730 03:46:57.154762 1043928 logs.go:123] Gathering logs for kube-proxy [0d145599f470] ...
I0730 03:46:57.154835 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d145599f470"
I0730 03:46:57.176795 1043928 logs.go:123] Gathering logs for kube-controller-manager [27b9c281f645] ...
I0730 03:46:57.176826 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b9c281f645"
I0730 03:46:57.215875 1043928 logs.go:123] Gathering logs for kubernetes-dashboard [99de9be3f2f4] ...
I0730 03:46:57.215912 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99de9be3f2f4"
I0730 03:46:57.251483 1043928 logs.go:123] Gathering logs for dmesg ...
I0730 03:46:57.251512 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0730 03:46:57.275689 1043928 logs.go:123] Gathering logs for describe nodes ...
I0730 03:46:57.275721 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0730 03:46:57.516078 1043928 logs.go:123] Gathering logs for etcd [ec0cdba2249f] ...
I0730 03:46:57.516115 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0cdba2249f"
I0730 03:46:57.562197 1043928 out.go:304] Setting ErrFile to fd 2...
I0730 03:46:57.562246 1043928 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0730 03:46:57.562374 1043928 out.go:239] X Problems detected in kubelet:
W0730 03:46:57.562391 1043928 out.go:239] Jul 30 03:46:26 old-k8s-version-111858 kubelet[1359]: E0730 03:46:26.796960 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:57.562436 1043928 out.go:239] Jul 30 03:46:32 old-k8s-version-111858 kubelet[1359]: E0730 03:46:32.801040 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:57.562450 1043928 out.go:239] Jul 30 03:46:39 old-k8s-version-111858 kubelet[1359]: E0730 03:46:39.796520 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:57.562464 1043928 out.go:239] Jul 30 03:46:43 old-k8s-version-111858 kubelet[1359]: E0730 03:46:43.796586 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:57.562471 1043928 out.go:239] Jul 30 03:46:51 old-k8s-version-111858 kubelet[1359]: E0730 03:46:51.796690 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
I0730 03:46:57.562477 1043928 out.go:304] Setting ErrFile to fd 2...
I0730 03:46:57.562502 1043928 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 03:47:07.562678 1043928 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0730 03:47:07.572110 1043928 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0730 03:47:07.574433 1043928 out.go:177]
W0730 03:47:07.576337 1043928 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0730 03:47:07.576386 1043928 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0730 03:47:07.576407 1043928 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0730 03:47:07.576413 1043928 out.go:239] *
W0730 03:47:07.577520 1043928 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0730 03:47:07.578733 1043928 out.go:177]
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-111858 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-111858
helpers_test.go:235: (dbg) docker inspect old-k8s-version-111858:
-- stdout --
[
{
"Id": "a34ac8a291a520eb6d2e643a4ad398e5ef6528d9fba447b03948f61200b47ec2",
"Created": "2024-07-30T03:38:30.070492542Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1044148,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-07-30T03:40:54.920221134Z",
"FinishedAt": "2024-07-30T03:40:53.625123795Z"
},
"Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
"ResolvConfPath": "/var/lib/docker/containers/a34ac8a291a520eb6d2e643a4ad398e5ef6528d9fba447b03948f61200b47ec2/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/a34ac8a291a520eb6d2e643a4ad398e5ef6528d9fba447b03948f61200b47ec2/hostname",
"HostsPath": "/var/lib/docker/containers/a34ac8a291a520eb6d2e643a4ad398e5ef6528d9fba447b03948f61200b47ec2/hosts",
"LogPath": "/var/lib/docker/containers/a34ac8a291a520eb6d2e643a4ad398e5ef6528d9fba447b03948f61200b47ec2/a34ac8a291a520eb6d2e643a4ad398e5ef6528d9fba447b03948f61200b47ec2-json.log",
"Name": "/old-k8s-version-111858",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-111858:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-111858",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/24953c5c8a91363a58e713dde45b2c64521fc81f94534453a27b214964825324-init/diff:/var/lib/docker/overlay2/a5d76e7946a29d92f7d5301e32d46b5a7eee3f238b3fc4536961a91c50aaad85/diff",
"MergedDir": "/var/lib/docker/overlay2/24953c5c8a91363a58e713dde45b2c64521fc81f94534453a27b214964825324/merged",
"UpperDir": "/var/lib/docker/overlay2/24953c5c8a91363a58e713dde45b2c64521fc81f94534453a27b214964825324/diff",
"WorkDir": "/var/lib/docker/overlay2/24953c5c8a91363a58e713dde45b2c64521fc81f94534453a27b214964825324/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-111858",
"Source": "/var/lib/docker/volumes/old-k8s-version-111858/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-111858",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-111858",
"name.minikube.sigs.k8s.io": "old-k8s-version-111858",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "cb23c233abc97bef0f1ea803c60d41b84cd042732cc359e8d3dcb429b2d6595a",
"SandboxKey": "/var/run/docker/netns/cb23c233abc9",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "38754"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "38755"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "38758"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "38756"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "38757"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-111858": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:55:02",
"DriverOpts": null,
"NetworkID": "905108a7541e2c0fc68c9c7891437ce3e31fc5781c6006425db97081265d908a",
"EndpointID": "d83e3de1179222f7e97ec185d8b15761fca9f7caa646b0647945519be9d57c2f",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-111858",
"a34ac8a291a5"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-111858 -n old-k8s-version-111858
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-111858 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-111858 logs -n 25: (1.308628113s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| ssh | -p kubenet-264426 sudo | kubenet-264426 | jenkins | v1.33.1 | 30 Jul 24 03:39 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p kubenet-264426 sudo | kubenet-264426 | jenkins | v1.33.1 | 30 Jul 24 03:39 UTC | 30 Jul 24 03:39 UTC |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p kubenet-264426 sudo find | kubenet-264426 | jenkins | v1.33.1 | 30 Jul 24 03:39 UTC | 30 Jul 24 03:39 UTC |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p kubenet-264426 sudo crio | kubenet-264426 | jenkins | v1.33.1 | 30 Jul 24 03:39 UTC | 30 Jul 24 03:39 UTC |
| | config | | | | | |
| delete | -p kubenet-264426 | kubenet-264426 | jenkins | v1.33.1 | 30 Jul 24 03:39 UTC | 30 Jul 24 03:39 UTC |
| start | -p embed-certs-429785 | embed-certs-429785 | jenkins | v1.33.1 | 30 Jul 24 03:39 UTC | 30 Jul 24 03:40 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.30.3 | | | | | |
| addons | enable metrics-server -p embed-certs-429785 | embed-certs-429785 | jenkins | v1.33.1 | 30 Jul 24 03:40 UTC | 30 Jul 24 03:40 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p embed-certs-429785 | embed-certs-429785 | jenkins | v1.33.1 | 30 Jul 24 03:40 UTC | 30 Jul 24 03:40 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p embed-certs-429785 | embed-certs-429785 | jenkins | v1.33.1 | 30 Jul 24 03:40 UTC | 30 Jul 24 03:40 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p embed-certs-429785 | embed-certs-429785 | jenkins | v1.33.1 | 30 Jul 24 03:40 UTC | 30 Jul 24 03:45 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.30.3 | | | | | |
| addons | enable metrics-server -p old-k8s-version-111858 | old-k8s-version-111858 | jenkins | v1.33.1 | 30 Jul 24 03:40 UTC | 30 Jul 24 03:40 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-111858 | old-k8s-version-111858 | jenkins | v1.33.1 | 30 Jul 24 03:40 UTC | 30 Jul 24 03:40 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-111858 | old-k8s-version-111858 | jenkins | v1.33.1 | 30 Jul 24 03:40 UTC | 30 Jul 24 03:40 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-111858 | old-k8s-version-111858 | jenkins | v1.33.1 | 30 Jul 24 03:40 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| image | embed-certs-429785 image list | embed-certs-429785 | jenkins | v1.33.1 | 30 Jul 24 03:45 UTC | 30 Jul 24 03:45 UTC |
| | --format=json | | | | | |
| pause | -p embed-certs-429785 | embed-certs-429785 | jenkins | v1.33.1 | 30 Jul 24 03:45 UTC | 30 Jul 24 03:45 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p embed-certs-429785 | embed-certs-429785 | jenkins | v1.33.1 | 30 Jul 24 03:45 UTC | 30 Jul 24 03:45 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p embed-certs-429785 | embed-certs-429785 | jenkins | v1.33.1 | 30 Jul 24 03:45 UTC | 30 Jul 24 03:45 UTC |
| delete | -p embed-certs-429785 | embed-certs-429785 | jenkins | v1.33.1 | 30 Jul 24 03:45 UTC | 30 Jul 24 03:45 UTC |
| delete | -p | disable-driver-mounts-662987 | jenkins | v1.33.1 | 30 Jul 24 03:45 UTC | 30 Jul 24 03:45 UTC |
| | disable-driver-mounts-662987 | | | | | |
| start | -p no-preload-811518 --memory=2200 | no-preload-811518 | jenkins | v1.33.1 | 30 Jul 24 03:45 UTC | 30 Jul 24 03:46 UTC |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.31.0-beta.0 | | | | | |
| addons | enable metrics-server -p no-preload-811518 | no-preload-811518 | jenkins | v1.33.1 | 30 Jul 24 03:46 UTC | 30 Jul 24 03:46 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-811518 | no-preload-811518 | jenkins | v1.33.1 | 30 Jul 24 03:46 UTC | 30 Jul 24 03:46 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-811518 | no-preload-811518 | jenkins | v1.33.1 | 30 Jul 24 03:46 UTC | 30 Jul 24 03:46 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-811518 --memory=2200 | no-preload-811518 | jenkins | v1.33.1 | 30 Jul 24 03:46 UTC | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.31.0-beta.0 | | | | | |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/30 03:46:34
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.22.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0730 03:46:34.322019 1057854 out.go:291] Setting OutFile to fd 1 ...
I0730 03:46:34.322144 1057854 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 03:46:34.322152 1057854 out.go:304] Setting ErrFile to fd 2...
I0730 03:46:34.322157 1057854 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 03:46:34.322409 1057854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19347-652786/.minikube/bin
I0730 03:46:34.322788 1057854 out.go:298] Setting JSON to false
I0730 03:46:34.323941 1057854 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":84539,"bootTime":1722226656,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0730 03:46:34.324102 1057854 start.go:139] virtualization:
I0730 03:46:34.327003 1057854 out.go:177] * [no-preload-811518] minikube v1.33.1 on Ubuntu 20.04 (arm64)
I0730 03:46:30.771152 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:32.771223 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:34.330337 1057854 out.go:177] - MINIKUBE_LOCATION=19347
I0730 03:46:34.330496 1057854 notify.go:220] Checking for updates...
I0730 03:46:34.334087 1057854 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0730 03:46:34.336151 1057854 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19347-652786/kubeconfig
I0730 03:46:34.337806 1057854 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19347-652786/.minikube
I0730 03:46:34.339474 1057854 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0730 03:46:34.341159 1057854 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0730 03:46:34.343630 1057854 config.go:182] Loaded profile config "no-preload-811518": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
I0730 03:46:34.344213 1057854 driver.go:392] Setting default libvirt URI to qemu:///system
I0730 03:46:34.375018 1057854 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
I0730 03:46:34.375137 1057854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0730 03:46:34.435010 1057854 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-30 03:46:34.424702991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
I0730 03:46:34.435125 1057854 docker.go:307] overlay module found
I0730 03:46:34.438296 1057854 out.go:177] * Using the docker driver based on existing profile
I0730 03:46:34.440143 1057854 start.go:297] selected driver: docker
I0730 03:46:34.440165 1057854 start.go:901] validating driver "docker" against &{Name:no-preload-811518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-811518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0730 03:46:34.440300 1057854 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0730 03:46:34.440959 1057854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0730 03:46:34.509525 1057854 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-30 03:46:34.498518825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
I0730 03:46:34.509946 1057854 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0730 03:46:34.509987 1057854 cni.go:84] Creating CNI manager for ""
I0730 03:46:34.510005 1057854 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0730 03:46:34.510082 1057854 start.go:340] cluster config:
{Name:no-preload-811518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-811518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0730 03:46:34.512271 1057854 out.go:177] * Starting "no-preload-811518" primary control-plane node in "no-preload-811518" cluster
I0730 03:46:34.514009 1057854 cache.go:121] Beginning downloading kic base image for docker with docker
I0730 03:46:34.515793 1057854 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
I0730 03:46:34.518120 1057854 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
I0730 03:46:34.518222 1057854 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
I0730 03:46:34.518281 1057854 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/no-preload-811518/config.json ...
I0730 03:46:34.518588 1057854 cache.go:107] acquiring lock: {Name:mka016fd1cab22b0cddd617e5b6801c744b0166f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0730 03:46:34.518666 1057854 cache.go:115] /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0730 03:46:34.518679 1057854 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 98.444µs
I0730 03:46:34.518687 1057854 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0730 03:46:34.518698 1057854 cache.go:107] acquiring lock: {Name:mk58078faca6bcd89970ae400218f26d54f75be6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0730 03:46:34.518730 1057854 cache.go:115] /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
I0730 03:46:34.518744 1057854 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 47.606µs
I0730 03:46:34.518751 1057854 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
I0730 03:46:34.518768 1057854 cache.go:107] acquiring lock: {Name:mkd7d8ec478e4e8f93a9a23883ed95c15851bb73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0730 03:46:34.518802 1057854 cache.go:115] /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
I0730 03:46:34.518808 1057854 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 41.788µs
I0730 03:46:34.518815 1057854 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
I0730 03:46:34.518832 1057854 cache.go:107] acquiring lock: {Name:mk57464505762f2d5e3ad7bb9b3afd44442b268f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0730 03:46:34.518859 1057854 cache.go:115] /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
I0730 03:46:34.518871 1057854 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 40.911µs
I0730 03:46:34.518877 1057854 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
I0730 03:46:34.518886 1057854 cache.go:107] acquiring lock: {Name:mk4a8bff3f0db7622961469a1723ec1034d85d76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0730 03:46:34.518876 1057854 cache.go:107] acquiring lock: {Name:mkb98a0988f514a2755dc65f8ac9fdeb097f120e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0730 03:46:34.518917 1057854 cache.go:115] /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
I0730 03:46:34.518923 1057854 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 37.653µs
I0730 03:46:34.518930 1057854 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
I0730 03:46:34.518944 1057854 cache.go:115] /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
I0730 03:46:34.518952 1057854 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 84.069µs
I0730 03:46:34.518959 1057854 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
I0730 03:46:34.518971 1057854 cache.go:107] acquiring lock: {Name:mk9f4b6ee23d443337af1936c21cfa5acd8eb45d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0730 03:46:34.518939 1057854 cache.go:107] acquiring lock: {Name:mkdc2dc56c311c5aff77d0e7427a30c2b6f09d01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0730 03:46:34.519043 1057854 cache.go:115] /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
I0730 03:46:34.519053 1057854 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 114.617µs
I0730 03:46:34.519059 1057854 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
I0730 03:46:34.519072 1057854 cache.go:115] /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
I0730 03:46:34.519091 1057854 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 121.55µs
I0730 03:46:34.519099 1057854 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19347-652786/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
I0730 03:46:34.519111 1057854 cache.go:87] Successfully saved all images to host disk.
W0730 03:46:34.542167 1057854 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
I0730 03:46:34.542194 1057854 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
I0730 03:46:34.542279 1057854 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
I0730 03:46:34.542304 1057854 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
I0730 03:46:34.542310 1057854 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
I0730 03:46:34.542324 1057854 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
I0730 03:46:34.542332 1057854 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
I0730 03:46:34.667070 1057854 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
I0730 03:46:34.667110 1057854 cache.go:194] Successfully downloaded all kic artifacts
I0730 03:46:34.667151 1057854 start.go:360] acquireMachinesLock for no-preload-811518: {Name:mk0126f5a47a765f730694fbf6b7e9d206d7cd39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0730 03:46:34.667222 1057854 start.go:364] duration metric: took 46.113µs to acquireMachinesLock for "no-preload-811518"
I0730 03:46:34.667259 1057854 start.go:96] Skipping create...Using existing machine configuration
I0730 03:46:34.667269 1057854 fix.go:54] fixHost starting:
I0730 03:46:34.667531 1057854 cli_runner.go:164] Run: docker container inspect no-preload-811518 --format={{.State.Status}}
I0730 03:46:34.685633 1057854 fix.go:112] recreateIfNeeded on no-preload-811518: state=Stopped err=<nil>
W0730 03:46:34.685674 1057854 fix.go:138] unexpected machine state, will restart: <nil>
I0730 03:46:34.689682 1057854 out.go:177] * Restarting existing docker container for "no-preload-811518" ...
I0730 03:46:34.691946 1057854 cli_runner.go:164] Run: docker start no-preload-811518
I0730 03:46:35.028690 1057854 cli_runner.go:164] Run: docker container inspect no-preload-811518 --format={{.State.Status}}
I0730 03:46:35.051669 1057854 kic.go:430] container "no-preload-811518" state is running.
I0730 03:46:35.053381 1057854 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-811518
I0730 03:46:35.083741 1057854 profile.go:143] Saving config to /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/no-preload-811518/config.json ...
I0730 03:46:35.083984 1057854 machine.go:94] provisionDockerMachine start ...
I0730 03:46:35.084048 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:35.107299 1057854 main.go:141] libmachine: Using SSH client type: native
I0730 03:46:35.107573 1057854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38764 <nil> <nil>}
I0730 03:46:35.107583 1057854 main.go:141] libmachine: About to run SSH command:
hostname
I0730 03:46:35.108404 1057854 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0730 03:46:38.254164 1057854 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-811518
I0730 03:46:38.254190 1057854 ubuntu.go:169] provisioning hostname "no-preload-811518"
I0730 03:46:38.254256 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:38.277496 1057854 main.go:141] libmachine: Using SSH client type: native
I0730 03:46:38.277823 1057854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38764 <nil> <nil>}
I0730 03:46:38.277843 1057854 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-811518 && echo "no-preload-811518" | sudo tee /etc/hostname
I0730 03:46:38.430822 1057854 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-811518
I0730 03:46:38.430913 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:38.448172 1057854 main.go:141] libmachine: Using SSH client type: native
I0730 03:46:38.448430 1057854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38764 <nil> <nil>}
I0730 03:46:38.448455 1057854 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-811518' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-811518/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-811518' | sudo tee -a /etc/hosts;
fi
fi
I0730 03:46:38.586061 1057854 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0730 03:46:38.586090 1057854 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19347-652786/.minikube CaCertPath:/home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19347-652786/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19347-652786/.minikube}
I0730 03:46:38.586120 1057854 ubuntu.go:177] setting up certificates
I0730 03:46:38.586139 1057854 provision.go:84] configureAuth start
I0730 03:46:38.586209 1057854 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-811518
I0730 03:46:38.603119 1057854 provision.go:143] copyHostCerts
I0730 03:46:38.603196 1057854 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-652786/.minikube/ca.pem, removing ...
I0730 03:46:38.603208 1057854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-652786/.minikube/ca.pem
I0730 03:46:38.603288 1057854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19347-652786/.minikube/ca.pem (1082 bytes)
I0730 03:46:38.603392 1057854 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-652786/.minikube/cert.pem, removing ...
I0730 03:46:38.603405 1057854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-652786/.minikube/cert.pem
I0730 03:46:38.603437 1057854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19347-652786/.minikube/cert.pem (1123 bytes)
I0730 03:46:38.603492 1057854 exec_runner.go:144] found /home/jenkins/minikube-integration/19347-652786/.minikube/key.pem, removing ...
I0730 03:46:38.603502 1057854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19347-652786/.minikube/key.pem
I0730 03:46:38.603525 1057854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19347-652786/.minikube/key.pem (1679 bytes)
I0730 03:46:38.603586 1057854 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19347-652786/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca-key.pem org=jenkins.no-preload-811518 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-811518]
I0730 03:46:34.772803 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:37.270724 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:39.275136 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:39.420793 1057854 provision.go:177] copyRemoteCerts
I0730 03:46:39.420862 1057854 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0730 03:46:39.420905 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:39.442963 1057854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38764 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/no-preload-811518/id_rsa Username:docker}
I0730 03:46:39.542934 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0730 03:46:39.570652 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0730 03:46:39.596938 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0730 03:46:39.624306 1057854 provision.go:87] duration metric: took 1.0381494s to configureAuth
I0730 03:46:39.624335 1057854 ubuntu.go:193] setting minikube options for container-runtime
I0730 03:46:39.624547 1057854 config.go:182] Loaded profile config "no-preload-811518": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
I0730 03:46:39.624617 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:39.641397 1057854 main.go:141] libmachine: Using SSH client type: native
I0730 03:46:39.641796 1057854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38764 <nil> <nil>}
I0730 03:46:39.641813 1057854 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0730 03:46:39.775614 1057854 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0730 03:46:39.775641 1057854 ubuntu.go:71] root file system type: overlay
I0730 03:46:39.775761 1057854 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0730 03:46:39.775837 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:39.794043 1057854 main.go:141] libmachine: Using SSH client type: native
I0730 03:46:39.794318 1057854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38764 <nil> <nil>}
I0730 03:46:39.794405 1057854 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0730 03:46:39.942197 1057854 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0730 03:46:39.942311 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:39.962486 1057854 main.go:141] libmachine: Using SSH client type: native
I0730 03:46:39.962747 1057854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 38764 <nil> <nil>}
I0730 03:46:39.962770 1057854 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0730 03:46:40.136840 1057854 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0730 03:46:40.136939 1057854 machine.go:97] duration metric: took 5.05294317s to provisionDockerMachine
I0730 03:46:40.136966 1057854 start.go:293] postStartSetup for "no-preload-811518" (driver="docker")
I0730 03:46:40.137010 1057854 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0730 03:46:40.137108 1057854 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0730 03:46:40.137182 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:40.156172 1057854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38764 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/no-preload-811518/id_rsa Username:docker}
I0730 03:46:40.263047 1057854 ssh_runner.go:195] Run: cat /etc/os-release
I0730 03:46:40.269096 1057854 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0730 03:46:40.269140 1057854 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0730 03:46:40.269153 1057854 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0730 03:46:40.269163 1057854 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0730 03:46:40.269180 1057854 filesync.go:126] Scanning /home/jenkins/minikube-integration/19347-652786/.minikube/addons for local assets ...
I0730 03:46:40.269241 1057854 filesync.go:126] Scanning /home/jenkins/minikube-integration/19347-652786/.minikube/files for local assets ...
I0730 03:46:40.269347 1057854 filesync.go:149] local asset: /home/jenkins/minikube-integration/19347-652786/.minikube/files/etc/ssl/certs/6581782.pem -> 6581782.pem in /etc/ssl/certs
I0730 03:46:40.269464 1057854 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0730 03:46:40.281067 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/files/etc/ssl/certs/6581782.pem --> /etc/ssl/certs/6581782.pem (1708 bytes)
I0730 03:46:40.307908 1057854 start.go:296] duration metric: took 170.898223ms for postStartSetup
I0730 03:46:40.308044 1057854 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0730 03:46:40.308093 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:40.334057 1057854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38764 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/no-preload-811518/id_rsa Username:docker}
I0730 03:46:40.426501 1057854 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0730 03:46:40.431253 1057854 fix.go:56] duration metric: took 5.763975317s for fixHost
I0730 03:46:40.431278 1057854 start.go:83] releasing machines lock for "no-preload-811518", held for 5.764043631s
I0730 03:46:40.431350 1057854 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-811518
I0730 03:46:40.448562 1057854 ssh_runner.go:195] Run: cat /version.json
I0730 03:46:40.448636 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:40.448917 1057854 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0730 03:46:40.448986 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:40.478120 1057854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38764 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/no-preload-811518/id_rsa Username:docker}
I0730 03:46:40.490528 1057854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38764 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/no-preload-811518/id_rsa Username:docker}
I0730 03:46:40.577334 1057854 ssh_runner.go:195] Run: systemctl --version
I0730 03:46:40.729050 1057854 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0730 03:46:40.733830 1057854 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0730 03:46:40.753752 1057854 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0730 03:46:40.753894 1057854 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0730 03:46:40.764124 1057854 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0730 03:46:40.764159 1057854 start.go:495] detecting cgroup driver to use...
I0730 03:46:40.764197 1057854 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0730 03:46:40.764296 1057854 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0730 03:46:40.783003 1057854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0730 03:46:40.793529 1057854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0730 03:46:40.810880 1057854 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0730 03:46:40.811003 1057854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0730 03:46:40.822097 1057854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0730 03:46:40.833078 1057854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0730 03:46:40.844096 1057854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0730 03:46:40.855061 1057854 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0730 03:46:40.869987 1057854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0730 03:46:40.880973 1057854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0730 03:46:40.891589 1057854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0730 03:46:40.903256 1057854 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0730 03:46:40.912368 1057854 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0730 03:46:40.926139 1057854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0730 03:46:41.063008 1057854 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0730 03:46:41.181347 1057854 start.go:495] detecting cgroup driver to use...
I0730 03:46:41.181445 1057854 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0730 03:46:41.181529 1057854 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0730 03:46:41.200659 1057854 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0730 03:46:41.200780 1057854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0730 03:46:41.221344 1057854 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0730 03:46:41.245388 1057854 ssh_runner.go:195] Run: which cri-dockerd
I0730 03:46:41.249562 1057854 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0730 03:46:41.260670 1057854 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0730 03:46:41.298457 1057854 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0730 03:46:41.417189 1057854 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0730 03:46:41.542670 1057854 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0730 03:46:41.542904 1057854 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0730 03:46:41.568616 1057854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0730 03:46:41.674885 1057854 ssh_runner.go:195] Run: sudo systemctl restart docker
I0730 03:46:42.129019 1057854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0730 03:46:42.144416 1057854 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0730 03:46:42.161566 1057854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0730 03:46:42.177165 1057854 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0730 03:46:42.288913 1057854 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0730 03:46:42.402858 1057854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0730 03:46:42.505734 1057854 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0730 03:46:42.521889 1057854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0730 03:46:42.537039 1057854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0730 03:46:42.637083 1057854 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0730 03:46:42.727798 1057854 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0730 03:46:42.727915 1057854 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0730 03:46:42.732556 1057854 start.go:563] Will wait 60s for crictl version
I0730 03:46:42.732637 1057854 ssh_runner.go:195] Run: which crictl
I0730 03:46:42.736679 1057854 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0730 03:46:42.786341 1057854 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.1.1
RuntimeApiVersion: v1
I0730 03:46:42.786425 1057854 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0730 03:46:42.821235 1057854 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0730 03:46:42.847551 1057854 out.go:204] * Preparing Kubernetes v1.31.0-beta.0 on Docker 27.1.1 ...
I0730 03:46:42.847642 1057854 cli_runner.go:164] Run: docker network inspect no-preload-811518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0730 03:46:42.863606 1057854 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0730 03:46:42.867187 1057854 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
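The `/etc/hosts` refresh above follows a drop-then-append pattern: filter out any stale `host.minikube.internal` line, append the fresh mapping, and copy the result back in one `cp`. A sketch of the same pattern against a throwaway file (the sample entries are hypothetical):

```shell
# Operate on a temp copy instead of the real /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n' > "$hosts"
tmp=$(mktemp)
# Remove the old tab-separated entry, then append the fresh one.
{ grep -v $'\thost.minikube.internal$' "$hosts"; echo "192.168.76.1 host.minikube.internal"; } > "$tmp"
cp "$tmp" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # exactly one entry remains
rm -f "$tmp" "$hosts"
```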
I0730 03:46:42.885071 1057854 kubeadm.go:883] updating cluster {Name:no-preload-811518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-811518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0730 03:46:42.885194 1057854 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
I0730 03:46:42.885243 1057854 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0730 03:46:42.903398 1057854 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.0-beta.0
registry.k8s.io/kube-proxy:v1.31.0-beta.0
registry.k8s.io/kube-scheduler:v1.31.0-beta.0
registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
registry.k8s.io/etcd:3.5.14-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
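The "Images are preloaded" decision that follows compares the expected image list against the `docker images` output captured above. A minimal sketch of that set-difference check (assumption on the mechanism; image names taken from the log, with the "have" list standing in for live `docker images` output):

```shell
# Every expected image must appear in the runtime's image list.
want=$(mktemp); have=$(mktemp)
printf '%s\n' \
  "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" \
  "registry.k8s.io/etcd:3.5.14-0" > "$want"
printf '%s\n' \
  "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" \
  "registry.k8s.io/etcd:3.5.14-0" \
  "registry.k8s.io/pause:3.10" > "$have"
# grep -Fxv -f: lines of $want with no exact fixed-string match in $have.
if grep -Fxv -f "$have" "$want" > /dev/null; then
  echo "cache miss: would load images"
else
  echo "Images are preloaded, skipping loading"
fi
rm -f "$want" "$have"
```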
I0730 03:46:42.903425 1057854 cache_images.go:84] Images are preloaded, skipping loading
I0730 03:46:42.903436 1057854 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.0-beta.0 docker true true} ...
I0730 03:46:42.903559 1057854 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-811518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-811518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0730 03:46:42.903632 1057854 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0730 03:46:42.952335 1057854 cni.go:84] Creating CNI manager for ""
I0730 03:46:42.952362 1057854 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0730 03:46:42.952374 1057854 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0730 03:46:42.952394 1057854 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-811518 NodeName:no-preload-811518 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0730 03:46:42.952535 1057854 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "no-preload-811518"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.0-beta.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
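The generated config above is one multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`, later scp'd to `/var/tmp/minikube/kubeadm.yaml.new`. A sketch of a sanity check on that structure, reproducing just the `kind:` skeleton (the check itself is illustrative, not part of the test run):

```shell
# Reproduce the four-document skeleton and count the YAML documents.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
kind: InitConfiguration
---
kind: ClusterConfiguration
---
kind: KubeletConfiguration
---
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$tmp"   # 4
rm -f "$tmp"
```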
I0730 03:46:42.952604 1057854 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
I0730 03:46:42.964395 1057854 binaries.go:44] Found k8s binaries, skipping transfer
I0730 03:46:42.964469 1057854 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0730 03:46:42.973660 1057854 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
I0730 03:46:42.992954 1057854 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I0730 03:46:43.016828 1057854 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
I0730 03:46:43.036458 1057854 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0730 03:46:43.040339 1057854 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0730 03:46:43.056277 1057854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0730 03:46:43.158740 1057854 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0730 03:46:43.175172 1057854 certs.go:68] Setting up /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/no-preload-811518 for IP: 192.168.76.2
I0730 03:46:43.175194 1057854 certs.go:194] generating shared ca certs ...
I0730 03:46:43.175211 1057854 certs.go:226] acquiring lock for ca certs: {Name:mkd5662d6a9243b34d7b6c08a80c493f8c01d7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0730 03:46:43.175368 1057854 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19347-652786/.minikube/ca.key
I0730 03:46:43.175417 1057854 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19347-652786/.minikube/proxy-client-ca.key
I0730 03:46:43.175432 1057854 certs.go:256] generating profile certs ...
I0730 03:46:43.175527 1057854 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/no-preload-811518/client.key
I0730 03:46:43.175597 1057854 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/no-preload-811518/apiserver.key.7a15ee4a
I0730 03:46:43.175648 1057854 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/no-preload-811518/proxy-client.key
I0730 03:46:43.175762 1057854 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/658178.pem (1338 bytes)
W0730 03:46:43.175796 1057854 certs.go:480] ignoring /home/jenkins/minikube-integration/19347-652786/.minikube/certs/658178_empty.pem, impossibly tiny 0 bytes
I0730 03:46:43.175807 1057854 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca-key.pem (1675 bytes)
I0730 03:46:43.175832 1057854 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/ca.pem (1082 bytes)
I0730 03:46:43.175859 1057854 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/cert.pem (1123 bytes)
I0730 03:46:43.175883 1057854 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/certs/key.pem (1679 bytes)
I0730 03:46:43.175930 1057854 certs.go:484] found cert: /home/jenkins/minikube-integration/19347-652786/.minikube/files/etc/ssl/certs/6581782.pem (1708 bytes)
I0730 03:46:43.176548 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0730 03:46:43.207727 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0730 03:46:43.235048 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0730 03:46:43.261392 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0730 03:46:43.293656 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/no-preload-811518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0730 03:46:43.354065 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/no-preload-811518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0730 03:46:43.389080 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/no-preload-811518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0730 03:46:43.429134 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/profiles/no-preload-811518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0730 03:46:43.472291 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0730 03:46:43.502695 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/certs/658178.pem --> /usr/share/ca-certificates/658178.pem (1338 bytes)
I0730 03:46:43.528630 1057854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19347-652786/.minikube/files/etc/ssl/certs/6581782.pem --> /usr/share/ca-certificates/6581782.pem (1708 bytes)
I0730 03:46:43.555989 1057854 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0730 03:46:43.576612 1057854 ssh_runner.go:195] Run: openssl version
I0730 03:46:43.582890 1057854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0730 03:46:43.593474 1057854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0730 03:46:43.597269 1057854 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 30 02:36 /usr/share/ca-certificates/minikubeCA.pem
I0730 03:46:43.597335 1057854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0730 03:46:43.604519 1057854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0730 03:46:43.614113 1057854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/658178.pem && ln -fs /usr/share/ca-certificates/658178.pem /etc/ssl/certs/658178.pem"
I0730 03:46:43.624969 1057854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/658178.pem
I0730 03:46:43.628653 1057854 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 30 02:43 /usr/share/ca-certificates/658178.pem
I0730 03:46:43.628729 1057854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/658178.pem
I0730 03:46:43.636874 1057854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/658178.pem /etc/ssl/certs/51391683.0"
I0730 03:46:43.646551 1057854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6581782.pem && ln -fs /usr/share/ca-certificates/6581782.pem /etc/ssl/certs/6581782.pem"
I0730 03:46:43.656265 1057854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6581782.pem
I0730 03:46:43.659973 1057854 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 30 02:43 /usr/share/ca-certificates/6581782.pem
I0730 03:46:43.660091 1057854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6581782.pem
I0730 03:46:43.667673 1057854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6581782.pem /etc/ssl/certs/3ec20f2e.0"
I0730 03:46:43.679297 1057854 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0730 03:46:43.684432 1057854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0730 03:46:43.693526 1057854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0730 03:46:43.700862 1057854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0730 03:46:43.708255 1057854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0730 03:46:43.715598 1057854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0730 03:46:43.722651 1057854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0730 03:46:43.730655 1057854 kubeadm.go:392] StartCluster: {Name:no-preload-811518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-811518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0730 03:46:43.730881 1057854 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0730 03:46:43.748520 1057854 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0730 03:46:43.757558 1057854 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0730 03:46:43.757610 1057854 kubeadm.go:593] restartPrimaryControlPlane start ...
I0730 03:46:43.757663 1057854 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0730 03:46:43.770652 1057854 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0730 03:46:43.772013 1057854 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-811518" does not appear in /home/jenkins/minikube-integration/19347-652786/kubeconfig
I0730 03:46:43.772286 1057854 kubeconfig.go:62] /home/jenkins/minikube-integration/19347-652786/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-811518" cluster setting kubeconfig missing "no-preload-811518" context setting]
I0730 03:46:43.772712 1057854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-652786/kubeconfig: {Name:mk305a6aba596aa7115323de6e57c59ca62a0dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0730 03:46:43.774415 1057854 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0730 03:46:43.784054 1057854 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0730 03:46:43.784096 1057854 kubeadm.go:597] duration metric: took 26.473732ms to restartPrimaryControlPlane
I0730 03:46:43.784107 1057854 kubeadm.go:394] duration metric: took 53.462636ms to StartCluster
I0730 03:46:43.784124 1057854 settings.go:142] acquiring lock: {Name:mk99a6c16e82d0c6a2db8fb43f237845d315971d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0730 03:46:43.784189 1057854 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19347-652786/kubeconfig
I0730 03:46:43.785137 1057854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19347-652786/kubeconfig: {Name:mk305a6aba596aa7115323de6e57c59ca62a0dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0730 03:46:43.785353 1057854 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0730 03:46:43.785802 1057854 config.go:182] Loaded profile config "no-preload-811518": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
I0730 03:46:43.785870 1057854 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0730 03:46:43.786003 1057854 addons.go:69] Setting storage-provisioner=true in profile "no-preload-811518"
I0730 03:46:43.786041 1057854 addons.go:234] Setting addon storage-provisioner=true in "no-preload-811518"
W0730 03:46:43.786053 1057854 addons.go:243] addon storage-provisioner should already be in state true
I0730 03:46:43.786044 1057854 addons.go:69] Setting default-storageclass=true in profile "no-preload-811518"
I0730 03:46:43.786081 1057854 host.go:66] Checking if "no-preload-811518" exists ...
I0730 03:46:43.786153 1057854 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-811518"
I0730 03:46:43.786483 1057854 cli_runner.go:164] Run: docker container inspect no-preload-811518 --format={{.State.Status}}
I0730 03:46:43.786509 1057854 addons.go:69] Setting dashboard=true in profile "no-preload-811518"
I0730 03:46:43.786536 1057854 addons.go:234] Setting addon dashboard=true in "no-preload-811518"
W0730 03:46:43.786544 1057854 addons.go:243] addon dashboard should already be in state true
I0730 03:46:43.786564 1057854 host.go:66] Checking if "no-preload-811518" exists ...
I0730 03:46:43.786913 1057854 cli_runner.go:164] Run: docker container inspect no-preload-811518 --format={{.State.Status}}
I0730 03:46:43.786498 1057854 cli_runner.go:164] Run: docker container inspect no-preload-811518 --format={{.State.Status}}
I0730 03:46:43.786504 1057854 addons.go:69] Setting metrics-server=true in profile "no-preload-811518"
I0730 03:46:43.790182 1057854 addons.go:234] Setting addon metrics-server=true in "no-preload-811518"
W0730 03:46:43.790396 1057854 addons.go:243] addon metrics-server should already be in state true
I0730 03:46:43.790439 1057854 host.go:66] Checking if "no-preload-811518" exists ...
I0730 03:46:43.790381 1057854 out.go:177] * Verifying Kubernetes components...
I0730 03:46:43.790856 1057854 cli_runner.go:164] Run: docker container inspect no-preload-811518 --format={{.State.Status}}
I0730 03:46:43.792942 1057854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0730 03:46:43.837089 1057854 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0730 03:46:43.841695 1057854 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0730 03:46:43.842593 1057854 addons.go:234] Setting addon default-storageclass=true in "no-preload-811518"
W0730 03:46:43.842609 1057854 addons.go:243] addon default-storageclass should already be in state true
I0730 03:46:43.842635 1057854 host.go:66] Checking if "no-preload-811518" exists ...
I0730 03:46:43.843055 1057854 cli_runner.go:164] Run: docker container inspect no-preload-811518 --format={{.State.Status}}
I0730 03:46:43.844568 1057854 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0730 03:46:43.844588 1057854 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0730 03:46:43.844658 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:43.883671 1057854 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0730 03:46:43.885783 1057854 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0730 03:46:43.885807 1057854 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0730 03:46:43.885876 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:43.894706 1057854 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0730 03:46:43.897653 1057854 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0730 03:46:43.897688 1057854 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0730 03:46:43.897760 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:43.912768 1057854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38764 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/no-preload-811518/id_rsa Username:docker}
I0730 03:46:43.938828 1057854 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0730 03:46:43.938851 1057854 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0730 03:46:43.938918 1057854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-811518
I0730 03:46:43.965196 1057854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38764 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/no-preload-811518/id_rsa Username:docker}
I0730 03:46:43.973404 1057854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38764 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/no-preload-811518/id_rsa Username:docker}
I0730 03:46:43.991263 1057854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38764 SSHKeyPath:/home/jenkins/minikube-integration/19347-652786/.minikube/machines/no-preload-811518/id_rsa Username:docker}
I0730 03:46:44.004504 1057854 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0730 03:46:44.101758 1057854 node_ready.go:35] waiting up to 6m0s for node "no-preload-811518" to be "Ready" ...
I0730 03:46:44.142524 1057854 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0730 03:46:44.142546 1057854 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0730 03:46:44.186379 1057854 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0730 03:46:44.186449 1057854 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0730 03:46:44.275785 1057854 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0730 03:46:44.283000 1057854 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0730 03:46:44.283074 1057854 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0730 03:46:44.310904 1057854 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0730 03:46:41.771485 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:44.286631 1043928 pod_ready.go:102] pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:44.286659 1043928 pod_ready.go:81] duration metric: took 4m0.022189423s for pod "metrics-server-9975d5f86-42nxj" in "kube-system" namespace to be "Ready" ...
E0730 03:46:44.286669 1043928 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0730 03:46:44.286677 1043928 pod_ready.go:38] duration metric: took 5m26.009343714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0730 03:46:44.286695 1043928 api_server.go:52] waiting for apiserver process to appear ...
I0730 03:46:44.286767 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0730 03:46:44.316796 1043928 logs.go:276] 2 containers: [d68a1084ef5a d1dbee0d1be9]
I0730 03:46:44.316916 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0730 03:46:44.379497 1057854 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0730 03:46:44.379580 1057854 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0730 03:46:44.648957 1057854 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0730 03:46:44.649031 1057854 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0730 03:46:44.687363 1057854 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0730 03:46:44.687392 1057854 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0730 03:46:45.128585 1057854 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0730 03:46:45.128637 1057854 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0730 03:46:45.279304 1057854 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0730 03:46:45.279344 1057854 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0730 03:46:45.285097 1057854 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0092253s)
W0730 03:46:45.285146 1057854 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0730 03:46:45.285171 1057854 retry.go:31] will retry after 185.643089ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0730 03:46:45.471563 1057854 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0730 03:46:45.501316 1057854 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.190327971s)
W0730 03:46:45.501355 1057854 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0730 03:46:45.501376 1057854 retry.go:31] will retry after 358.545438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I0730 03:46:45.642539 1057854 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0730 03:46:45.642573 1057854 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0730 03:46:45.722644 1057854 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0730 03:46:45.779576 1057854 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0730 03:46:45.779619 1057854 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0730 03:46:45.860810 1057854 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0730 03:46:45.933056 1057854 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0730 03:46:45.933084 1057854 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0730 03:46:46.216306 1057854 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0730 03:46:46.216335 1057854 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0730 03:46:46.349778 1057854 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0730 03:46:44.346172 1043928 logs.go:276] 2 containers: [ec0cdba2249f 7015c3abc9b9]
I0730 03:46:44.346258 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0730 03:46:44.382340 1043928 logs.go:276] 2 containers: [2cd9c807cd34 b8b5a5f5b2cd]
I0730 03:46:44.382416 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0730 03:46:44.408911 1043928 logs.go:276] 2 containers: [ad56e41faf1a 15a7c60d1f7b]
I0730 03:46:44.408986 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0730 03:46:44.443353 1043928 logs.go:276] 2 containers: [5d46f562a227 0d145599f470]
I0730 03:46:44.443431 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0730 03:46:44.494693 1043928 logs.go:276] 2 containers: [27b9c281f645 81dd56fb259d]
I0730 03:46:44.494837 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0730 03:46:44.523644 1043928 logs.go:276] 0 containers: []
W0730 03:46:44.528436 1043928 logs.go:278] No container was found matching "kindnet"
I0730 03:46:44.528527 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0730 03:46:44.563892 1043928 logs.go:276] 1 containers: [99de9be3f2f4]
I0730 03:46:44.564007 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0730 03:46:44.594823 1043928 logs.go:276] 2 containers: [0ab09fb9a8c8 5805a5344565]
I0730 03:46:44.594867 1043928 logs.go:123] Gathering logs for kube-scheduler [15a7c60d1f7b] ...
I0730 03:46:44.594898 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a7c60d1f7b"
I0730 03:46:44.649215 1043928 logs.go:123] Gathering logs for kube-proxy [0d145599f470] ...
I0730 03:46:44.649279 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d145599f470"
I0730 03:46:44.684539 1043928 logs.go:123] Gathering logs for storage-provisioner [5805a5344565] ...
I0730 03:46:44.684620 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5805a5344565"
I0730 03:46:44.718639 1043928 logs.go:123] Gathering logs for dmesg ...
I0730 03:46:44.718714 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0730 03:46:44.749530 1043928 logs.go:123] Gathering logs for kube-apiserver [d1dbee0d1be9] ...
I0730 03:46:44.749719 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dbee0d1be9"
I0730 03:46:44.873389 1043928 logs.go:123] Gathering logs for coredns [b8b5a5f5b2cd] ...
I0730 03:46:44.873472 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b5a5f5b2cd"
I0730 03:46:44.926764 1043928 logs.go:123] Gathering logs for kube-scheduler [ad56e41faf1a] ...
I0730 03:46:44.926844 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad56e41faf1a"
I0730 03:46:44.957477 1043928 logs.go:123] Gathering logs for kubernetes-dashboard [99de9be3f2f4] ...
I0730 03:46:44.957508 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99de9be3f2f4"
I0730 03:46:44.999678 1043928 logs.go:123] Gathering logs for kube-apiserver [d68a1084ef5a] ...
I0730 03:46:44.999709 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68a1084ef5a"
I0730 03:46:45.090365 1043928 logs.go:123] Gathering logs for etcd [7015c3abc9b9] ...
I0730 03:46:45.090424 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7015c3abc9b9"
I0730 03:46:45.159871 1043928 logs.go:123] Gathering logs for coredns [2cd9c807cd34] ...
I0730 03:46:45.159910 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd9c807cd34"
I0730 03:46:45.191655 1043928 logs.go:123] Gathering logs for kube-controller-manager [81dd56fb259d] ...
I0730 03:46:45.191690 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81dd56fb259d"
I0730 03:46:45.295298 1043928 logs.go:123] Gathering logs for kubelet ...
I0730 03:46:45.295337 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0730 03:46:45.392951 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:18 old-k8s-version-111858 kubelet[1359]: E0730 03:41:18.315016 1359 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:45.393191 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:18 old-k8s-version-111858 kubelet[1359]: E0730 03:41:18.315138 1359 reflector.go:138] object-"kube-system"/"kube-proxy-token-jp96h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-jp96h" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:45.400189 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:20 old-k8s-version-111858 kubelet[1359]: E0730 03:41:20.353061 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:45.401009 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:20 old-k8s-version-111858 kubelet[1359]: E0730 03:41:20.398187 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.401532 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:21 old-k8s-version-111858 kubelet[1359]: E0730 03:41:21.431634 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.408332 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:34 old-k8s-version-111858 kubelet[1359]: E0730 03:41:34.817700 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:45.408686 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:36 old-k8s-version-111858 kubelet[1359]: E0730 03:41:36.897125 1359 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-rrwkc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-rrwkc" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:45.413078 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:44 old-k8s-version-111858 kubelet[1359]: E0730 03:41:44.313565 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:45.413461 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:44 old-k8s-version-111858 kubelet[1359]: E0730 03:41:44.681668 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.413655 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:46 old-k8s-version-111858 kubelet[1359]: E0730 03:41:46.797782 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.414313 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:51 old-k8s-version-111858 kubelet[1359]: E0730 03:41:51.772734 1359 pod_workers.go:191] Error syncing pod 61977ceb-fabc-4963-9a9a-a69ce9b13905 ("storage-provisioner_kube-system(61977ceb-fabc-4963-9a9a-a69ce9b13905)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(61977ceb-fabc-4963-9a9a-a69ce9b13905)"
W0730 03:46:45.421683 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:58 old-k8s-version-111858 kubelet[1359]: E0730 03:41:58.482463 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:45.423804 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:58 old-k8s-version-111858 kubelet[1359]: E0730 03:41:58.509101 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:45.424136 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:09 old-k8s-version-111858 kubelet[1359]: E0730 03:42:09.796542 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.424327 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:11 old-k8s-version-111858 kubelet[1359]: E0730 03:42:11.796425 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.426607 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:24 old-k8s-version-111858 kubelet[1359]: E0730 03:42:24.376186 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:45.426796 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:24 old-k8s-version-111858 kubelet[1359]: E0730 03:42:24.804673 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.426982 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:35 old-k8s-version-111858 kubelet[1359]: E0730 03:42:35.796278 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.427179 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:36 old-k8s-version-111858 kubelet[1359]: E0730 03:42:36.797986 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.429273 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:50 old-k8s-version-111858 kubelet[1359]: E0730 03:42:50.833614 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:45.429471 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:51 old-k8s-version-111858 kubelet[1359]: E0730 03:42:51.803823 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.430812 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:02 old-k8s-version-111858 kubelet[1359]: E0730 03:43:02.798080 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.431024 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:04 old-k8s-version-111858 kubelet[1359]: E0730 03:43:04.796877 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.431220 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:15 old-k8s-version-111858 kubelet[1359]: E0730 03:43:15.797776 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.433472 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:16 old-k8s-version-111858 kubelet[1359]: E0730 03:43:16.367003 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:45.433691 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:27 old-k8s-version-111858 kubelet[1359]: E0730 03:43:27.796546 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.433889 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:31 old-k8s-version-111858 kubelet[1359]: E0730 03:43:31.796344 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.434082 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:41 old-k8s-version-111858 kubelet[1359]: E0730 03:43:41.796601 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.434281 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:46 old-k8s-version-111858 kubelet[1359]: E0730 03:43:46.800531 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.434465 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:53 old-k8s-version-111858 kubelet[1359]: E0730 03:43:53.796486 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.434661 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:57 old-k8s-version-111858 kubelet[1359]: E0730 03:43:57.804900 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.434845 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:08 old-k8s-version-111858 kubelet[1359]: E0730 03:44:08.799152 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.435043 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:09 old-k8s-version-111858 kubelet[1359]: E0730 03:44:09.796891 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.439098 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:19 old-k8s-version-111858 kubelet[1359]: E0730 03:44:19.811955 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:45.439329 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:20 old-k8s-version-111858 kubelet[1359]: E0730 03:44:20.796408 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.439533 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:31 old-k8s-version-111858 kubelet[1359]: E0730 03:44:31.796408 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.439718 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:32 old-k8s-version-111858 kubelet[1359]: E0730 03:44:32.804546 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.441957 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:46 old-k8s-version-111858 kubelet[1359]: E0730 03:44:46.480773 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:45.442166 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:47 old-k8s-version-111858 kubelet[1359]: E0730 03:44:47.796662 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.442374 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:57 old-k8s-version-111858 kubelet[1359]: E0730 03:44:57.796558 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.442559 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:01 old-k8s-version-111858 kubelet[1359]: E0730 03:45:01.796960 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.442755 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:08 old-k8s-version-111858 kubelet[1359]: E0730 03:45:08.804919 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.442941 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:12 old-k8s-version-111858 kubelet[1359]: E0730 03:45:12.797499 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.443138 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:23 old-k8s-version-111858 kubelet[1359]: E0730 03:45:23.796294 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.443332 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:23 old-k8s-version-111858 kubelet[1359]: E0730 03:45:23.801724 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.444699 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:35 old-k8s-version-111858 kubelet[1359]: E0730 03:45:35.796294 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.444898 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:35 old-k8s-version-111858 kubelet[1359]: E0730 03:45:35.797085 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.445097 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:46 old-k8s-version-111858 kubelet[1359]: E0730 03:45:46.800946 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.445289 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:50 old-k8s-version-111858 kubelet[1359]: E0730 03:45:50.798282 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.445490 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:01 old-k8s-version-111858 kubelet[1359]: E0730 03:46:01.796604 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.445692 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:02 old-k8s-version-111858 kubelet[1359]: E0730 03:46:02.834688 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.445891 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:14 old-k8s-version-111858 kubelet[1359]: E0730 03:46:14.810612 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.446086 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:17 old-k8s-version-111858 kubelet[1359]: E0730 03:46:17.796583 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.446283 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:26 old-k8s-version-111858 kubelet[1359]: E0730 03:46:26.796960 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.446506 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:32 old-k8s-version-111858 kubelet[1359]: E0730 03:46:32.801040 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.446705 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:39 old-k8s-version-111858 kubelet[1359]: E0730 03:46:39.796520 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:45.446892 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:43 old-k8s-version-111858 kubelet[1359]: E0730 03:46:43.796586 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0730 03:46:45.446903 1043928 logs.go:123] Gathering logs for etcd [ec0cdba2249f] ...
I0730 03:46:45.446918 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0cdba2249f"
I0730 03:46:45.505695 1043928 logs.go:123] Gathering logs for kube-proxy [5d46f562a227] ...
I0730 03:46:45.505728 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d46f562a227"
I0730 03:46:45.554881 1043928 logs.go:123] Gathering logs for Docker ...
I0730 03:46:45.554967 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0730 03:46:45.591910 1043928 logs.go:123] Gathering logs for container status ...
I0730 03:46:45.595756 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0730 03:46:45.695314 1043928 logs.go:123] Gathering logs for describe nodes ...
I0730 03:46:45.695408 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0730 03:46:45.983814 1043928 logs.go:123] Gathering logs for kube-controller-manager [27b9c281f645] ...
I0730 03:46:45.983886 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b9c281f645"
I0730 03:46:46.074444 1043928 logs.go:123] Gathering logs for storage-provisioner [0ab09fb9a8c8] ...
I0730 03:46:46.074521 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab09fb9a8c8"
I0730 03:46:46.114837 1043928 out.go:304] Setting ErrFile to fd 2...
I0730 03:46:46.114860 1043928 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0730 03:46:46.114910 1043928 out.go:239] X Problems detected in kubelet:
W0730 03:46:46.114918 1043928 out.go:239] Jul 30 03:46:17 old-k8s-version-111858 kubelet[1359]: E0730 03:46:17.796583 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:46.114925 1043928 out.go:239] Jul 30 03:46:26 old-k8s-version-111858 kubelet[1359]: E0730 03:46:26.796960 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:46.114933 1043928 out.go:239] Jul 30 03:46:32 old-k8s-version-111858 kubelet[1359]: E0730 03:46:32.801040 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:46.114941 1043928 out.go:239] Jul 30 03:46:39 old-k8s-version-111858 kubelet[1359]: E0730 03:46:39.796520 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:46.114957 1043928 out.go:239] Jul 30 03:46:43 old-k8s-version-111858 kubelet[1359]: E0730 03:46:43.796586 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0730 03:46:46.114965 1043928 out.go:304] Setting ErrFile to fd 2...
I0730 03:46:46.114971 1043928 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 03:46:50.257602 1057854 node_ready.go:49] node "no-preload-811518" has status "Ready":"True"
I0730 03:46:50.257679 1057854 node_ready.go:38] duration metric: took 6.155860911s for node "no-preload-811518" to be "Ready" ...
I0730 03:46:50.257705 1057854 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0730 03:46:50.282804 1057854 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-h2hnc" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.311375 1057854 pod_ready.go:92] pod "coredns-5cfdc65f69-h2hnc" in "kube-system" namespace has status "Ready":"True"
I0730 03:46:50.311446 1057854 pod_ready.go:81] duration metric: took 28.559394ms for pod "coredns-5cfdc65f69-h2hnc" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.311473 1057854 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-811518" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.395658 1057854 pod_ready.go:92] pod "etcd-no-preload-811518" in "kube-system" namespace has status "Ready":"True"
I0730 03:46:50.395734 1057854 pod_ready.go:81] duration metric: took 84.23596ms for pod "etcd-no-preload-811518" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.395761 1057854 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-811518" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.403362 1057854 pod_ready.go:92] pod "kube-apiserver-no-preload-811518" in "kube-system" namespace has status "Ready":"True"
I0730 03:46:50.403432 1057854 pod_ready.go:81] duration metric: took 7.649834ms for pod "kube-apiserver-no-preload-811518" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.403459 1057854 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-811518" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.409377 1057854 pod_ready.go:92] pod "kube-controller-manager-no-preload-811518" in "kube-system" namespace has status "Ready":"True"
I0730 03:46:50.409454 1057854 pod_ready.go:81] duration metric: took 5.972014ms for pod "kube-controller-manager-no-preload-811518" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.409489 1057854 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nxk9j" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.471071 1057854 pod_ready.go:92] pod "kube-proxy-nxk9j" in "kube-system" namespace has status "Ready":"True"
I0730 03:46:50.471148 1057854 pod_ready.go:81] duration metric: took 61.623186ms for pod "kube-proxy-nxk9j" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.471183 1057854 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-811518" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.863820 1057854 pod_ready.go:92] pod "kube-scheduler-no-preload-811518" in "kube-system" namespace has status "Ready":"True"
I0730 03:46:50.863846 1057854 pod_ready.go:81] duration metric: took 392.62497ms for pod "kube-scheduler-no-preload-811518" in "kube-system" namespace to be "Ready" ...
I0730 03:46:50.863859 1057854 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-h4dch" in "kube-system" namespace to be "Ready" ...
I0730 03:46:52.872011 1057854 pod_ready.go:102] pod "metrics-server-78fcd8795b-h4dch" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:53.571542 1057854 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.099924131s)
I0730 03:46:53.774925 1057854 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.052239746s)
I0730 03:46:53.774957 1057854 addons.go:475] Verifying addon metrics-server=true in "no-preload-811518"
I0730 03:46:53.775006 1057854 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.914167623s)
I0730 03:46:53.913332 1057854 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.563505025s)
I0730 03:46:53.916441 1057854 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-811518 addons enable metrics-server
I0730 03:46:53.919221 1057854 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
I0730 03:46:53.921814 1057854 addons.go:510] duration metric: took 10.135952822s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
I0730 03:46:55.370392 1057854 pod_ready.go:102] pod "metrics-server-78fcd8795b-h4dch" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:57.371124 1057854 pod_ready.go:102] pod "metrics-server-78fcd8795b-h4dch" in "kube-system" namespace has status "Ready":"False"
I0730 03:46:56.115321 1043928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0730 03:46:56.132355 1043928 api_server.go:72] duration metric: took 5m52.526084127s to wait for apiserver process to appear ...
I0730 03:46:56.132379 1043928 api_server.go:88] waiting for apiserver healthz status ...
I0730 03:46:56.132462 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0730 03:46:56.167430 1043928 logs.go:276] 2 containers: [d68a1084ef5a d1dbee0d1be9]
I0730 03:46:56.167506 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0730 03:46:56.196491 1043928 logs.go:276] 2 containers: [ec0cdba2249f 7015c3abc9b9]
I0730 03:46:56.196587 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0730 03:46:56.229447 1043928 logs.go:276] 2 containers: [2cd9c807cd34 b8b5a5f5b2cd]
I0730 03:46:56.229534 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0730 03:46:56.252389 1043928 logs.go:276] 2 containers: [ad56e41faf1a 15a7c60d1f7b]
I0730 03:46:56.252470 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0730 03:46:56.294541 1043928 logs.go:276] 2 containers: [5d46f562a227 0d145599f470]
I0730 03:46:56.294616 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0730 03:46:56.339108 1043928 logs.go:276] 2 containers: [27b9c281f645 81dd56fb259d]
I0730 03:46:56.339193 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0730 03:46:56.384112 1043928 logs.go:276] 0 containers: []
W0730 03:46:56.384135 1043928 logs.go:278] No container was found matching "kindnet"
I0730 03:46:56.384195 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0730 03:46:56.425653 1043928 logs.go:276] 2 containers: [0ab09fb9a8c8 5805a5344565]
I0730 03:46:56.425733 1043928 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0730 03:46:56.456660 1043928 logs.go:276] 1 containers: [99de9be3f2f4]
I0730 03:46:56.456695 1043928 logs.go:123] Gathering logs for kube-scheduler [ad56e41faf1a] ...
I0730 03:46:56.456707 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad56e41faf1a"
I0730 03:46:56.507485 1043928 logs.go:123] Gathering logs for storage-provisioner [5805a5344565] ...
I0730 03:46:56.507618 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5805a5344565"
I0730 03:46:56.539413 1043928 logs.go:123] Gathering logs for container status ...
I0730 03:46:56.539512 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0730 03:46:56.635429 1043928 logs.go:123] Gathering logs for kube-scheduler [15a7c60d1f7b] ...
I0730 03:46:56.635545 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a7c60d1f7b"
I0730 03:46:56.668779 1043928 logs.go:123] Gathering logs for storage-provisioner [0ab09fb9a8c8] ...
I0730 03:46:56.668811 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ab09fb9a8c8"
I0730 03:46:56.695834 1043928 logs.go:123] Gathering logs for Docker ...
I0730 03:46:56.695863 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0730 03:46:56.727593 1043928 logs.go:123] Gathering logs for kubelet ...
I0730 03:46:56.727797 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0730 03:46:56.785122 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:18 old-k8s-version-111858 kubelet[1359]: E0730 03:41:18.315016 1359 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:56.785365 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:18 old-k8s-version-111858 kubelet[1359]: E0730 03:41:18.315138 1359 reflector.go:138] object-"kube-system"/"kube-proxy-token-jp96h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-jp96h" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:56.792338 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:20 old-k8s-version-111858 kubelet[1359]: E0730 03:41:20.353061 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:56.793137 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:20 old-k8s-version-111858 kubelet[1359]: E0730 03:41:20.398187 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.793676 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:21 old-k8s-version-111858 kubelet[1359]: E0730 03:41:21.431634 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.800615 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:34 old-k8s-version-111858 kubelet[1359]: E0730 03:41:34.817700 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:56.800973 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:36 old-k8s-version-111858 kubelet[1359]: E0730 03:41:36.897125 1359 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-rrwkc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-rrwkc" is forbidden: User "system:node:old-k8s-version-111858" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-111858' and this object
W0730 03:46:56.805333 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:44 old-k8s-version-111858 kubelet[1359]: E0730 03:41:44.313565 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:56.805720 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:44 old-k8s-version-111858 kubelet[1359]: E0730 03:41:44.681668 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.805907 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:46 old-k8s-version-111858 kubelet[1359]: E0730 03:41:46.797782 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.806553 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:51 old-k8s-version-111858 kubelet[1359]: E0730 03:41:51.772734 1359 pod_workers.go:191] Error syncing pod 61977ceb-fabc-4963-9a9a-a69ce9b13905 ("storage-provisioner_kube-system(61977ceb-fabc-4963-9a9a-a69ce9b13905)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(61977ceb-fabc-4963-9a9a-a69ce9b13905)"
W0730 03:46:56.809253 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:58 old-k8s-version-111858 kubelet[1359]: E0730 03:41:58.482463 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:56.811328 1043928 logs.go:138] Found kubelet problem: Jul 30 03:41:58 old-k8s-version-111858 kubelet[1359]: E0730 03:41:58.509101 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:56.811659 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:09 old-k8s-version-111858 kubelet[1359]: E0730 03:42:09.796542 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.811845 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:11 old-k8s-version-111858 kubelet[1359]: E0730 03:42:11.796425 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.814240 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:24 old-k8s-version-111858 kubelet[1359]: E0730 03:42:24.376186 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:56.814466 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:24 old-k8s-version-111858 kubelet[1359]: E0730 03:42:24.804673 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.814684 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:35 old-k8s-version-111858 kubelet[1359]: E0730 03:42:35.796278 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.814911 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:36 old-k8s-version-111858 kubelet[1359]: E0730 03:42:36.797986 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.817181 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:50 old-k8s-version-111858 kubelet[1359]: E0730 03:42:50.833614 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:56.817389 1043928 logs.go:138] Found kubelet problem: Jul 30 03:42:51 old-k8s-version-111858 kubelet[1359]: E0730 03:42:51.803823 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.817596 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:02 old-k8s-version-111858 kubelet[1359]: E0730 03:43:02.798080 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.817808 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:04 old-k8s-version-111858 kubelet[1359]: E0730 03:43:04.796877 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.817999 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:15 old-k8s-version-111858 kubelet[1359]: E0730 03:43:15.797776 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.820222 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:16 old-k8s-version-111858 kubelet[1359]: E0730 03:43:16.367003 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:56.820407 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:27 old-k8s-version-111858 kubelet[1359]: E0730 03:43:27.796546 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.820602 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:31 old-k8s-version-111858 kubelet[1359]: E0730 03:43:31.796344 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.820788 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:41 old-k8s-version-111858 kubelet[1359]: E0730 03:43:41.796601 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.820985 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:46 old-k8s-version-111858 kubelet[1359]: E0730 03:43:46.800531 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.821170 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:53 old-k8s-version-111858 kubelet[1359]: E0730 03:43:53.796486 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.821367 1043928 logs.go:138] Found kubelet problem: Jul 30 03:43:57 old-k8s-version-111858 kubelet[1359]: E0730 03:43:57.804900 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.821552 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:08 old-k8s-version-111858 kubelet[1359]: E0730 03:44:08.799152 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.821754 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:09 old-k8s-version-111858 kubelet[1359]: E0730 03:44:09.796891 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.823824 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:19 old-k8s-version-111858 kubelet[1359]: E0730 03:44:19.811955 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0730 03:46:56.824020 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:20 old-k8s-version-111858 kubelet[1359]: E0730 03:44:20.796408 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.824218 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:31 old-k8s-version-111858 kubelet[1359]: E0730 03:44:31.796408 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.824402 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:32 old-k8s-version-111858 kubelet[1359]: E0730 03:44:32.804546 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.826629 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:46 old-k8s-version-111858 kubelet[1359]: E0730 03:44:46.480773 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0730 03:46:56.826814 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:47 old-k8s-version-111858 kubelet[1359]: E0730 03:44:47.796662 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827009 1043928 logs.go:138] Found kubelet problem: Jul 30 03:44:57 old-k8s-version-111858 kubelet[1359]: E0730 03:44:57.796558 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827195 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:01 old-k8s-version-111858 kubelet[1359]: E0730 03:45:01.796960 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827390 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:08 old-k8s-version-111858 kubelet[1359]: E0730 03:45:08.804919 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827591 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:12 old-k8s-version-111858 kubelet[1359]: E0730 03:45:12.797499 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827790 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:23 old-k8s-version-111858 kubelet[1359]: E0730 03:45:23.796294 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.827976 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:23 old-k8s-version-111858 kubelet[1359]: E0730 03:45:23.801724 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.828173 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:35 old-k8s-version-111858 kubelet[1359]: E0730 03:45:35.796294 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.828357 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:35 old-k8s-version-111858 kubelet[1359]: E0730 03:45:35.797085 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.828552 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:46 old-k8s-version-111858 kubelet[1359]: E0730 03:45:46.800946 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.828736 1043928 logs.go:138] Found kubelet problem: Jul 30 03:45:50 old-k8s-version-111858 kubelet[1359]: E0730 03:45:50.798282 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.828934 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:01 old-k8s-version-111858 kubelet[1359]: E0730 03:46:01.796604 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.829117 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:02 old-k8s-version-111858 kubelet[1359]: E0730 03:46:02.834688 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.829312 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:14 old-k8s-version-111858 kubelet[1359]: E0730 03:46:14.810612 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.829498 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:17 old-k8s-version-111858 kubelet[1359]: E0730 03:46:17.796583 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.829699 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:26 old-k8s-version-111858 kubelet[1359]: E0730 03:46:26.796960 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.829883 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:32 old-k8s-version-111858 kubelet[1359]: E0730 03:46:32.801040 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.830082 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:39 old-k8s-version-111858 kubelet[1359]: E0730 03:46:39.796520 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.830266 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:43 old-k8s-version-111858 kubelet[1359]: E0730 03:46:43.796586 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:56.830463 1043928 logs.go:138] Found kubelet problem: Jul 30 03:46:51 old-k8s-version-111858 kubelet[1359]: E0730 03:46:51.796690 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
I0730 03:46:56.830476 1043928 logs.go:123] Gathering logs for kube-apiserver [d1dbee0d1be9] ...
I0730 03:46:56.830491 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1dbee0d1be9"
I0730 03:46:56.910378 1043928 logs.go:123] Gathering logs for coredns [b8b5a5f5b2cd] ...
I0730 03:46:56.910420 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b5a5f5b2cd"
I0730 03:46:56.941772 1043928 logs.go:123] Gathering logs for kube-proxy [5d46f562a227] ...
I0730 03:46:56.941803 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d46f562a227"
I0730 03:46:56.963900 1043928 logs.go:123] Gathering logs for kube-controller-manager [81dd56fb259d] ...
I0730 03:46:56.963931 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81dd56fb259d"
I0730 03:46:57.029152 1043928 logs.go:123] Gathering logs for kube-apiserver [d68a1084ef5a] ...
I0730 03:46:57.029275 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d68a1084ef5a"
I0730 03:46:57.089335 1043928 logs.go:123] Gathering logs for etcd [7015c3abc9b9] ...
I0730 03:46:57.089375 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7015c3abc9b9"
I0730 03:46:57.123687 1043928 logs.go:123] Gathering logs for coredns [2cd9c807cd34] ...
I0730 03:46:57.123721 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cd9c807cd34"
I0730 03:46:57.154762 1043928 logs.go:123] Gathering logs for kube-proxy [0d145599f470] ...
I0730 03:46:57.154835 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d145599f470"
I0730 03:46:57.176795 1043928 logs.go:123] Gathering logs for kube-controller-manager [27b9c281f645] ...
I0730 03:46:57.176826 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27b9c281f645"
I0730 03:46:57.215875 1043928 logs.go:123] Gathering logs for kubernetes-dashboard [99de9be3f2f4] ...
I0730 03:46:57.215912 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99de9be3f2f4"
I0730 03:46:57.251483 1043928 logs.go:123] Gathering logs for dmesg ...
I0730 03:46:57.251512 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0730 03:46:57.275689 1043928 logs.go:123] Gathering logs for describe nodes ...
I0730 03:46:57.275721 1043928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0730 03:46:57.516078 1043928 logs.go:123] Gathering logs for etcd [ec0cdba2249f] ...
I0730 03:46:57.516115 1043928 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0cdba2249f"
I0730 03:46:57.562197 1043928 out.go:304] Setting ErrFile to fd 2...
I0730 03:46:57.562246 1043928 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0730 03:46:57.562374 1043928 out.go:239] X Problems detected in kubelet:
W0730 03:46:57.562391 1043928 out.go:239] Jul 30 03:46:26 old-k8s-version-111858 kubelet[1359]: E0730 03:46:26.796960 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:57.562436 1043928 out.go:239] Jul 30 03:46:32 old-k8s-version-111858 kubelet[1359]: E0730 03:46:32.801040 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:57.562450 1043928 out.go:239] Jul 30 03:46:39 old-k8s-version-111858 kubelet[1359]: E0730 03:46:39.796520 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0730 03:46:57.562464 1043928 out.go:239] Jul 30 03:46:43 old-k8s-version-111858 kubelet[1359]: E0730 03:46:43.796586 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0730 03:46:57.562471 1043928 out.go:239] Jul 30 03:46:51 old-k8s-version-111858 kubelet[1359]: E0730 03:46:51.796690 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
I0730 03:46:57.562477 1043928 out.go:304] Setting ErrFile to fd 2...
I0730 03:46:57.562502 1043928 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0730 03:46:59.371630 1057854 pod_ready.go:102] pod "metrics-server-78fcd8795b-h4dch" in "kube-system" namespace has status "Ready":"False"
I0730 03:47:01.872098 1057854 pod_ready.go:102] pod "metrics-server-78fcd8795b-h4dch" in "kube-system" namespace has status "Ready":"False"
I0730 03:47:07.562678 1043928 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0730 03:47:07.572110 1043928 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0730 03:47:07.574433 1043928 out.go:177]
W0730 03:47:07.576337 1043928 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0730 03:47:07.576386 1043928 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0730 03:47:07.576407 1043928 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0730 03:47:07.576413 1043928 out.go:239] *
W0730 03:47:07.577520 1043928 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0730 03:47:07.578733 1043928 out.go:177]
==> Docker <==
Jul 30 03:41:58 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:41:58.157999688Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:41:58 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:41:58.478926936Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:41:58 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:41:58.479068391Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:41:58 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:41:58.479103476Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Jul 30 03:41:58 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:41:58.506278969Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jul 30 03:41:58 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:41:58.506343174Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jul 30 03:41:58 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:41:58.508345923Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jul 30 03:42:24 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:42:24.059780596Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:42:24 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:42:24.373202264Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:42:24 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:42:24.373344474Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:42:24 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:42:24.373378616Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Jul 30 03:42:50 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:42:50.824296141Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jul 30 03:42:50 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:42:50.824775365Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jul 30 03:42:50 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:42:50.832623474Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jul 30 03:43:16 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:43:16.064387615Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:43:16 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:43:16.364142173Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:43:16 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:43:16.364350622Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:43:16 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:43:16.364386101Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Jul 30 03:44:19 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:44:19.808870621Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jul 30 03:44:19 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:44:19.808919646Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jul 30 03:44:19 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:44:19.810866511Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Jul 30 03:44:46 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:44:46.172916027Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:44:46 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:44:46.477759902Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:44:46 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:44:46.477863959Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Jul 30 03:44:46 old-k8s-version-111858 dockerd[1056]: time="2024-07-30T03:44:46.477895269Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
0ab09fb9a8c85 ba04bb24b9575 5 minutes ago Running storage-provisioner 3 6184c1ed055bc storage-provisioner
99de9be3f2f40 kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 5 minutes ago Running kubernetes-dashboard 0 d482136abc594 kubernetes-dashboard-cd95d586-chkf2
877c1d93d6b3f 1611cd07b61d5 5 minutes ago Running busybox 1 7065421d1e8ae busybox
2cd9c807cd34c db91994f4ee8f 5 minutes ago Running coredns 1 e48755b1ebd1c coredns-74ff55c5b-t6jx8
5805a53445650 ba04bb24b9575 5 minutes ago Exited storage-provisioner 2 6184c1ed055bc storage-provisioner
5d46f562a227e 25a5233254979 5 minutes ago Running kube-proxy 1 1b4bdd3796aee kube-proxy-6kqkd
ad56e41faf1a0 e7605f88f17d6 6 minutes ago Running kube-scheduler 1 752a798986d40 kube-scheduler-old-k8s-version-111858
27b9c281f6459 1df8a2b116bd1 6 minutes ago Running kube-controller-manager 1 7c98c68b7421c kube-controller-manager-old-k8s-version-111858
ec0cdba2249ff 05b738aa1bc63 6 minutes ago Running etcd 1 43dc546550877 etcd-old-k8s-version-111858
d68a1084ef5a7 2c08bbbc02d3a 6 minutes ago Running kube-apiserver 1 79c187b8b1922 kube-apiserver-old-k8s-version-111858
72888abe15c85 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 6 minutes ago Exited busybox 0 3afcd3300f608 busybox
0d145599f4702 25a5233254979 7 minutes ago Exited kube-proxy 0 eaf1a9b28cb37 kube-proxy-6kqkd
b8b5a5f5b2cd5 db91994f4ee8f 7 minutes ago Exited coredns 0 d5b5bab1ec90d coredns-74ff55c5b-t6jx8
d1dbee0d1be92 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 f57c0f40d3c57 kube-apiserver-old-k8s-version-111858
15a7c60d1f7b4 e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 dfb28171e95ac kube-scheduler-old-k8s-version-111858
81dd56fb259db 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 eb69d002290c7 kube-controller-manager-old-k8s-version-111858
7015c3abc9b9c 05b738aa1bc63 8 minutes ago Exited etcd 0 73a319c8d6056 etcd-old-k8s-version-111858
==> coredns [2cd9c807cd34] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:46569 - 31877 "HINFO IN 7125148584389867326.4811010041222841265. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.045976392s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0730 03:41:50.873244 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-30 03:41:20.872187119 +0000 UTC m=+0.040077297) (total time: 30.000608769s):
Trace[2019727887]: [30.000608769s] [30.000608769s] END
I0730 03:41:50.873263 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-30 03:41:20.872775709 +0000 UTC m=+0.040665879) (total time: 30.000374621s):
Trace[1427131847]: [30.000374621s] [30.000374621s] END
E0730 03:41:50.873316 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
E0730 03:41:50.873286 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0730 03:41:50.873537 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-30 03:41:20.873031994 +0000 UTC m=+0.040922172) (total time: 30.000489238s):
Trace[911902081]: [30.000489238s] [30.000489238s] END
E0730 03:41:50.873549 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
==> coredns [b8b5a5f5b2cd] <==
I0730 03:40:06.278317 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-30 03:39:36.277929613 +0000 UTC m=+0.127137276) (total time: 30.000275327s):
Trace[2019727887]: [30.000275327s] [30.000275327s] END
E0730 03:40:06.278350 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0730 03:40:06.278741 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-30 03:39:36.278065972 +0000 UTC m=+0.127273627) (total time: 30.000659448s):
Trace[939984059]: [30.000659448s] [30.000659448s] END
E0730 03:40:06.278749 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0730 03:40:06.278811 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-30 03:39:36.277559958 +0000 UTC m=+0.126767621) (total time: 30.001242573s):
Trace[911902081]: [30.001242573s] [30.001242573s] END
E0730 03:40:06.278815 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:49798 - 48689 "HINFO IN 2728577730469588433.4966317850289246952. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013227166s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: old-k8s-version-111858
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-111858
kubernetes.io/os=linux
minikube.k8s.io/commit=b13baeaf4895dcc6a8c5d0ab64a27ff86dff4ae3
minikube.k8s.io/name=old-k8s-version-111858
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_07_30T03_39_16_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 30 Jul 2024 03:39:12 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-111858
AcquireTime: <unset>
RenewTime: Tue, 30 Jul 2024 03:47:01 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 30 Jul 2024 03:42:09 +0000 Tue, 30 Jul 2024 03:39:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 30 Jul 2024 03:42:09 +0000 Tue, 30 Jul 2024 03:39:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 30 Jul 2024 03:42:09 +0000 Tue, 30 Jul 2024 03:39:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 30 Jul 2024 03:42:09 +0000 Tue, 30 Jul 2024 03:39:30 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.85.2
Hostname: old-k8s-version-111858
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022368Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022368Ki
pods: 110
System Info:
Machine ID: 0a2b35241260456991dba3a6cb319053
System UUID: 9ab25f44-25eb-4ba8-af1b-6f04b782b25b
Boot ID: 40f7f4e5-3dd4-4b42-9db6-4265e768ba51
Kernel Version: 5.15.0-1065-aws
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://27.1.1
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m37s
kube-system coredns-74ff55c5b-t6jx8 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 7m36s
kube-system etcd-old-k8s-version-111858 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 7m48s
kube-system kube-apiserver-old-k8s-version-111858 250m (12%) 0 (0%) 0 (0%) 0 (0%) 7m48s
kube-system kube-controller-manager-old-k8s-version-111858 200m (10%) 0 (0%) 0 (0%) 0 (0%) 7m48s
kube-system kube-proxy-6kqkd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m36s
kube-system kube-scheduler-old-k8s-version-111858 100m (5%) 0 (0%) 0 (0%) 0 (0%) 7m48s
kube-system metrics-server-9975d5f86-42nxj 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m25s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m33s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-vrtzs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m32s
kubernetes-dashboard kubernetes-dashboard-cd95d586-chkf2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m32s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 370Mi (4%) 170Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m8s (x5 over 8m8s) kubelet Node old-k8s-version-111858 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m8s (x5 over 8m8s) kubelet Node old-k8s-version-111858 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m8s (x5 over 8m8s) kubelet Node old-k8s-version-111858 status is now: NodeHasSufficientPID
Normal Starting 7m49s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 7m48s kubelet Node old-k8s-version-111858 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m48s kubelet Node old-k8s-version-111858 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m48s kubelet Node old-k8s-version-111858 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m48s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 7m38s kubelet Node old-k8s-version-111858 status is now: NodeReady
Normal Starting 7m32s kube-proxy Starting kube-proxy.
Normal Starting 6m2s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 6m2s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6m1s (x8 over 6m2s) kubelet Node old-k8s-version-111858 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m1s (x8 over 6m2s) kubelet Node old-k8s-version-111858 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m1s (x7 over 6m2s) kubelet Node old-k8s-version-111858 status is now: NodeHasSufficientPID
Normal Starting 5m47s kube-proxy Starting kube-proxy.
==> dmesg <==
[ +0.001286] FS-Cache: O-key=[8] 'e4793b0000000000'
[ +0.000990] FS-Cache: N-cookie c=0000011a [p=00000111 fl=2 nc=0 na=1]
[ +0.000942] FS-Cache: N-cookie d=000000005e8a4d65{9p.inode} n=000000009f90ec84
[ +0.001048] FS-Cache: N-key=[8] 'e4793b0000000000'
[ +0.003524] FS-Cache: Duplicate cookie detected
[ +0.000715] FS-Cache: O-cookie c=00000114 [p=00000111 fl=226 nc=0 na=1]
[ +0.000970] FS-Cache: O-cookie d=000000005e8a4d65{9p.inode} n=0000000011957361
[ +0.001099] FS-Cache: O-key=[8] 'e4793b0000000000'
[ +0.000709] FS-Cache: N-cookie c=0000011b [p=00000111 fl=2 nc=0 na=1]
[ +0.000951] FS-Cache: N-cookie d=000000005e8a4d65{9p.inode} n=0000000011511e34
[ +0.001151] FS-Cache: N-key=[8] 'e4793b0000000000'
[ +3.182056] FS-Cache: Duplicate cookie detected
[ +0.000747] FS-Cache: O-cookie c=00000112 [p=00000111 fl=226 nc=0 na=1]
[ +0.001013] FS-Cache: O-cookie d=000000005e8a4d65{9p.inode} n=000000000ffb7ace
[ +0.001158] FS-Cache: O-key=[8] 'e3793b0000000000'
[ +0.000738] FS-Cache: N-cookie c=0000011d [p=00000111 fl=2 nc=0 na=1]
[ +0.000992] FS-Cache: N-cookie d=000000005e8a4d65{9p.inode} n=000000005f17544b
[ +0.001094] FS-Cache: N-key=[8] 'e3793b0000000000'
[ +0.353646] FS-Cache: Duplicate cookie detected
[ +0.000739] FS-Cache: O-cookie c=00000117 [p=00000111 fl=226 nc=0 na=1]
[ +0.000991] FS-Cache: O-cookie d=000000005e8a4d65{9p.inode} n=00000000d8d2a3d7
[ +0.001056] FS-Cache: O-key=[8] 'e9793b0000000000'
[ +0.000754] FS-Cache: N-cookie c=0000011e [p=00000111 fl=2 nc=0 na=1]
[ +0.000949] FS-Cache: N-cookie d=000000005e8a4d65{9p.inode} n=000000009f90ec84
[ +0.001047] FS-Cache: N-key=[8] 'e9793b0000000000'
==> etcd [7015c3abc9b9] <==
raft2024/07/30 03:39:01 INFO: 9f0758e1c58a86ed became candidate at term 2
raft2024/07/30 03:39:01 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
raft2024/07/30 03:39:01 INFO: 9f0758e1c58a86ed became leader at term 2
raft2024/07/30 03:39:01 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
2024-07-30 03:39:01.877261 I | etcdserver: setting up the initial cluster version to 3.4
2024-07-30 03:39:01.878084 N | etcdserver/membership: set the initial cluster version to 3.4
2024-07-30 03:39:01.878182 I | etcdserver/api: enabled capabilities for version 3.4
2024-07-30 03:39:01.878210 I | etcdserver: published {Name:old-k8s-version-111858 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
2024-07-30 03:39:01.878329 I | embed: ready to serve client requests
2024-07-30 03:39:01.878399 I | embed: ready to serve client requests
2024-07-30 03:39:01.886998 I | embed: serving client requests on 192.168.85.2:2379
2024-07-30 03:39:01.902523 I | embed: serving client requests on 127.0.0.1:2379
2024-07-30 03:39:14.571863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:39:22.562967 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:39:26.657656 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:39:36.658061 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:39:46.658043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:39:56.657789 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:40:06.657865 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:40:16.658017 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:40:26.658071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:40:36.658077 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:40:43.251148 N | pkg/osutil: received terminated signal, shutting down...
WARNING: 2024/07/30 03:40:43 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2024-07-30 03:40:43.302696 I | etcdserver: skipped leadership transfer for single voting member cluster
==> etcd [ec0cdba2249f] <==
2024-07-30 03:43:02.527706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:43:12.527659 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:43:22.527747 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:43:32.527708 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:43:42.527750 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:43:52.527694 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:44:02.527739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:44:12.527608 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:44:22.529194 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:44:32.527673 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:44:42.527622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:44:52.527659 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:45:02.538343 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:45:12.527652 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:45:22.528024 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:45:32.527664 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:45:42.527854 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:45:52.527703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:46:02.527857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:46:12.550476 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:46:22.527972 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:46:32.527820 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:46:42.527779 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:46:52.527685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-30 03:47:02.527823 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
03:47:09 up 23:29, 0 users, load average: 2.51, 3.05, 3.76
Linux old-k8s-version-111858 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
==> kube-apiserver [d1dbee0d1be9] <==
I0730 03:40:43.326797 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0730 03:40:43.326935 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0730 03:40:43.327061 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0730 03:40:43.329111 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W0730 03:40:43.329364 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.329437 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.329501 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.329634 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.329691 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.329752 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.329802 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.350051 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.350393 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.350601 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.350796 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.351019 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.350825 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.351046 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.351085 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.351705 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.351880 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.352046 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.352222 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.352397 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0730 03:40:43.352568 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
==> kube-apiserver [d68a1084ef5a] <==
I0730 03:43:38.609837 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0730 03:43:38.609846 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0730 03:44:11.468987 1 client.go:360] parsed scheme: "passthrough"
I0730 03:44:11.469028 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0730 03:44:11.469037 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0730 03:44:21.224872 1 handler_proxy.go:102] no RequestInfo found in the context
E0730 03:44:21.224949 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0730 03:44:21.224958 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0730 03:44:50.128167 1 client.go:360] parsed scheme: "passthrough"
I0730 03:44:50.128214 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0730 03:44:50.128223 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0730 03:45:30.500219 1 client.go:360] parsed scheme: "passthrough"
I0730 03:45:30.501262 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0730 03:45:30.501762 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0730 03:46:05.293317 1 client.go:360] parsed scheme: "passthrough"
I0730 03:46:05.293361 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0730 03:46:05.293371 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0730 03:46:19.349160 1 handler_proxy.go:102] no RequestInfo found in the context
E0730 03:46:19.349261 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0730 03:46:19.349275 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0730 03:46:44.024010 1 client.go:360] parsed scheme: "passthrough"
I0730 03:46:44.024072 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0730 03:46:44.024081 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [27b9c281f645] <==
W0730 03:42:42.459327 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0730 03:43:08.506759 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0730 03:43:14.109676 1 request.go:655] Throttling request took 1.048207498s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0730 03:43:14.961149 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0730 03:43:39.008854 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0730 03:43:46.611662 1 request.go:655] Throttling request took 1.048070874s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0730 03:43:47.464359 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0730 03:44:09.512264 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0730 03:44:19.116239 1 request.go:655] Throttling request took 1.048312535s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
W0730 03:44:19.968069 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0730 03:44:40.017267 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0730 03:44:51.618509 1 request.go:655] Throttling request took 1.048458903s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0730 03:44:52.470780 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0730 03:45:10.519235 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0730 03:45:24.124420 1 request.go:655] Throttling request took 1.04805211s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0730 03:45:24.979764 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0730 03:45:41.021087 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0730 03:45:56.630206 1 request.go:655] Throttling request took 1.048240659s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0730 03:45:57.481614 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0730 03:46:11.523062 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0730 03:46:29.132033 1 request.go:655] Throttling request took 1.048359195s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0730 03:46:29.983500 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0730 03:46:42.025173 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0730 03:47:01.633835 1 request.go:655] Throttling request took 1.048059304s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0730 03:47:02.485387 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
==> kube-controller-manager [81dd56fb259d] <==
I0730 03:39:32.859260 1 shared_informer.go:247] Caches are synced for disruption
I0730 03:39:32.859267 1 disruption.go:339] Sending events to api server.
I0730 03:39:32.859886 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0730 03:39:32.860382 1 shared_informer.go:247] Caches are synced for stateful set
I0730 03:39:32.860453 1 shared_informer.go:247] Caches are synced for daemon sets
I0730 03:39:32.860471 1 shared_informer.go:247] Caches are synced for deployment
I0730 03:39:32.861822 1 shared_informer.go:247] Caches are synced for HPA
I0730 03:39:32.913786 1 shared_informer.go:247] Caches are synced for attach detach
I0730 03:39:32.922684 1 shared_informer.go:247] Caches are synced for service account
I0730 03:39:32.927294 1 shared_informer.go:247] Caches are synced for namespace
E0730 03:39:32.964933 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0730 03:39:32.966322 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I0730 03:39:33.008651 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0730 03:39:33.074349 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-t6jx8"
I0730 03:39:33.097974 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6kqkd"
I0730 03:39:33.146553 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tns8l"
E0730 03:39:33.202843 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0730 03:39:33.330658 1 shared_informer.go:247] Caches are synced for garbage collector
I0730 03:39:33.330681 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
E0730 03:39:33.402949 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4a665fe5-5daf-4a3e-a37a-6ea459eee4d7", ResourceVersion:"270", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63857907555, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001a5b080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001a5b0a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001a5b0c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001a4cf00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a5b0e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001a5b100), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001a5b140)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001a64480), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001a62858), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400067f260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000116d50)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001a628a8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0730 03:39:33.412304 1 shared_informer.go:247] Caches are synced for garbage collector
I0730 03:39:35.279217 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0730 03:39:35.320420 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-tns8l"
I0730 03:40:41.828188 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
I0730 03:40:43.052208 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-42nxj"
==> kube-proxy [0d145599f470] <==
I0730 03:39:36.676990 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0730 03:39:36.677309 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0730 03:39:36.697031 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0730 03:39:36.697355 1 server_others.go:185] Using iptables Proxier.
I0730 03:39:36.697759 1 server.go:650] Version: v1.20.0
I0730 03:39:36.698969 1 config.go:315] Starting service config controller
I0730 03:39:36.699101 1 shared_informer.go:240] Waiting for caches to sync for service config
I0730 03:39:36.699267 1 config.go:224] Starting endpoint slice config controller
I0730 03:39:36.699351 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0730 03:39:36.799367 1 shared_informer.go:247] Caches are synced for service config
I0730 03:39:36.799516 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [5d46f562a227] <==
I0730 03:41:21.157010 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0730 03:41:21.157201 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0730 03:41:21.183180 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0730 03:41:21.183393 1 server_others.go:185] Using iptables Proxier.
I0730 03:41:21.183732 1 server.go:650] Version: v1.20.0
I0730 03:41:21.184636 1 config.go:315] Starting service config controller
I0730 03:41:21.184806 1 shared_informer.go:240] Waiting for caches to sync for service config
I0730 03:41:21.185552 1 config.go:224] Starting endpoint slice config controller
I0730 03:41:21.186993 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0730 03:41:21.286938 1 shared_informer.go:247] Caches are synced for service config
I0730 03:41:21.287141 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [15a7c60d1f7b] <==
I0730 03:39:05.578115 1 serving.go:331] Generated self-signed cert in-memory
W0730 03:39:12.591305 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0730 03:39:12.591516 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0730 03:39:12.591562 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0730 03:39:12.591683 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0730 03:39:12.655570 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0730 03:39:12.656901 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0730 03:39:12.656930 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0730 03:39:12.656956 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0730 03:39:12.665204 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0730 03:39:12.669726 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0730 03:39:12.669965 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0730 03:39:12.670164 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0730 03:39:12.670283 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0730 03:39:12.670469 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0730 03:39:12.671826 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0730 03:39:12.671930 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0730 03:39:12.672352 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0730 03:39:12.672584 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0730 03:39:12.672891 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0730 03:39:12.673090 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0730 03:39:13.530730 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0730 03:39:13.735155 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0730 03:39:15.557900 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [ad56e41faf1a] <==
I0730 03:41:11.740665 1 serving.go:331] Generated self-signed cert in-memory
W0730 03:41:18.230038 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0730 03:41:18.230075 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0730 03:41:18.230092 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0730 03:41:18.230098 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0730 03:41:18.479283 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0730 03:41:18.479318 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0730 03:41:18.480937 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0730 03:41:18.505640 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0730 03:41:18.581338 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jul 30 03:44:46 old-k8s-version-111858 kubelet[1359]: E0730 03:44:46.480139 1359 kuberuntime_image.go:51] Pull image "registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
Jul 30 03:44:46 old-k8s-version-111858 kubelet[1359]: E0730 03:44:46.480576 1359 kuberuntime_manager.go:829] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubernetes-dashboard-token-rrwkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76): ErrImagePull: rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
Jul 30 03:44:46 old-k8s-version-111858 kubelet[1359]: E0730 03:44:46.480773 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Jul 30 03:44:47 old-k8s-version-111858 kubelet[1359]: E0730 03:44:47.796662 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 30 03:44:57 old-k8s-version-111858 kubelet[1359]: E0730 03:44:57.796558 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Jul 30 03:45:01 old-k8s-version-111858 kubelet[1359]: E0730 03:45:01.796960 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 30 03:45:08 old-k8s-version-111858 kubelet[1359]: E0730 03:45:08.804919 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Jul 30 03:45:12 old-k8s-version-111858 kubelet[1359]: E0730 03:45:12.797499 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 30 03:45:23 old-k8s-version-111858 kubelet[1359]: E0730 03:45:23.796294 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Jul 30 03:45:23 old-k8s-version-111858 kubelet[1359]: E0730 03:45:23.801724 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 30 03:45:35 old-k8s-version-111858 kubelet[1359]: E0730 03:45:35.796294 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Jul 30 03:45:35 old-k8s-version-111858 kubelet[1359]: E0730 03:45:35.797085 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 30 03:45:46 old-k8s-version-111858 kubelet[1359]: E0730 03:45:46.800946 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Jul 30 03:45:50 old-k8s-version-111858 kubelet[1359]: E0730 03:45:50.798282 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 30 03:46:01 old-k8s-version-111858 kubelet[1359]: E0730 03:46:01.796604 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Jul 30 03:46:02 old-k8s-version-111858 kubelet[1359]: E0730 03:46:02.834688 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 30 03:46:14 old-k8s-version-111858 kubelet[1359]: E0730 03:46:14.810612 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Jul 30 03:46:17 old-k8s-version-111858 kubelet[1359]: E0730 03:46:17.796583 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 30 03:46:26 old-k8s-version-111858 kubelet[1359]: E0730 03:46:26.796960 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Jul 30 03:46:32 old-k8s-version-111858 kubelet[1359]: E0730 03:46:32.801040 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 30 03:46:39 old-k8s-version-111858 kubelet[1359]: E0730 03:46:39.796520 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Jul 30 03:46:43 old-k8s-version-111858 kubelet[1359]: E0730 03:46:43.796586 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 30 03:46:51 old-k8s-version-111858 kubelet[1359]: E0730 03:46:51.796690 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Jul 30 03:46:58 old-k8s-version-111858 kubelet[1359]: E0730 03:46:58.796675 1359 pod_workers.go:191] Error syncing pod 05099a10-4bdb-4a47-b1e1-252fbd453417 ("metrics-server-9975d5f86-42nxj_kube-system(05099a10-4bdb-4a47-b1e1-252fbd453417)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 30 03:47:04 old-k8s-version-111858 kubelet[1359]: E0730 03:47:04.800136 1359 pod_workers.go:191] Error syncing pod 30430794-4070-4f22-8865-273973031b76 ("dashboard-metrics-scraper-8d5bb5db8-vrtzs_kubernetes-dashboard(30430794-4070-4f22-8865-273973031b76)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
==> kubernetes-dashboard [99de9be3f2f4] <==
2024/07/30 03:41:44 Starting overwatch
2024/07/30 03:41:44 Using namespace: kubernetes-dashboard
2024/07/30 03:41:44 Using in-cluster config to connect to apiserver
2024/07/30 03:41:44 Using secret token for csrf signing
2024/07/30 03:41:44 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/07/30 03:41:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/07/30 03:41:44 Successful initial request to the apiserver, version: v1.20.0
2024/07/30 03:41:44 Generating JWE encryption key
2024/07/30 03:41:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/07/30 03:41:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/07/30 03:41:44 Initializing JWE encryption key from synchronized object
2024/07/30 03:41:44 Creating in-cluster Sidecar client
2024/07/30 03:41:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/30 03:41:44 Serving insecurely on HTTP port: 9090
2024/07/30 03:42:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/30 03:42:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/30 03:43:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/30 03:43:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/30 03:44:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/30 03:44:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/30 03:45:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/30 03:45:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/30 03:46:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/30 03:46:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [0ab09fb9a8c8] <==
I0730 03:42:04.991235 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0730 03:42:05.018715 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0730 03:42:05.018851 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0730 03:42:22.491242 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0730 03:42:22.491648 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-111858_3839e1ed-4666-4020-8955-9b76f2f8a69e!
I0730 03:42:22.492253 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8ba2c1a3-0d05-484a-864f-febbb1fa63c7", APIVersion:"v1", ResourceVersion:"776", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-111858_3839e1ed-4666-4020-8955-9b76f2f8a69e became leader
I0730 03:42:22.592849 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-111858_3839e1ed-4666-4020-8955-9b76f2f8a69e!
==> storage-provisioner [5805a5344565] <==
I0730 03:41:20.847326 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0730 03:41:50.851688 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-111858 -n old-k8s-version-111858
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-111858 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-42nxj dashboard-metrics-scraper-8d5bb5db8-vrtzs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-111858 describe pod metrics-server-9975d5f86-42nxj dashboard-metrics-scraper-8d5bb5db8-vrtzs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-111858 describe pod metrics-server-9975d5f86-42nxj dashboard-metrics-scraper-8d5bb5db8-vrtzs: exit status 1 (92.341447ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-42nxj" not found
Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-vrtzs" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-111858 describe pod metrics-server-9975d5f86-42nxj dashboard-metrics-scraper-8d5bb5db8-vrtzs: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (375.89s)