=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-551944 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-551944 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0: exit status 102 (6m15.164503586s)
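For context, the test harness drives the minikube binary as a subprocess and asserts on its exit status; a minimal stdlib-only sketch of that pattern (illustrative only, not the actual start_stop_delete_test.go helper, and with the flag list abridged):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Abridged version of the start invocation logged above.
        cmd := exec.Command("out/minikube-linux-arm64", "start",
            "-p", "old-k8s-version-551944", "--memory=2200",
            "--driver=docker", "--container-runtime=docker",
            "--kubernetes-version=v1.20.0")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // The failure above surfaces here as ExitCode() == 102.
            fmt.Printf("non-zero exit: %d\n%s", exitErr.ExitCode(), out)
        }
    }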
-- stdout --
* [old-k8s-version-551944] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20470
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20470-2372/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20470-2372/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-551944" primary control-plane node in "old-k8s-version-551944" cluster
* Pulling base image v0.0.46-1741860993-20523 ...
* Restarting existing docker container for "old-k8s-version-551944" ...
* Preparing Kubernetes v1.20.0 on Docker 28.0.1 ...
* Verifying Kubernetes components...
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-551944 addons enable metrics-server
* Enabled addons: storage-provisioner, dashboard, default-storageclass, metrics-server
-- /stdout --
** stderr **
I0329 17:14:08.450986 322664 out.go:345] Setting OutFile to fd 1 ...
I0329 17:14:08.451842 322664 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:14:08.451867 322664 out.go:358] Setting ErrFile to fd 2...
I0329 17:14:08.451887 322664 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:14:08.452162 322664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20470-2372/.minikube/bin
I0329 17:14:08.454585 322664 out.go:352] Setting JSON to false
I0329 17:14:08.455771 322664 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7000,"bootTime":1743261449,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1080-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0329 17:14:08.456274 322664 start.go:139] virtualization:
I0329 17:14:08.459980 322664 out.go:177] * [old-k8s-version-551944] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0329 17:14:08.465194 322664 out.go:177] - MINIKUBE_LOCATION=20470
I0329 17:14:08.465259 322664 notify.go:220] Checking for updates...
I0329 17:14:08.474550 322664 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0329 17:14:08.477874 322664 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20470-2372/kubeconfig
I0329 17:14:08.480858 322664 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20470-2372/.minikube
I0329 17:14:08.483791 322664 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0329 17:14:08.486892 322664 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0329 17:14:08.490613 322664 config.go:182] Loaded profile config "old-k8s-version-551944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0329 17:14:08.494371 322664 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
I0329 17:14:08.497321 322664 driver.go:394] Setting default libvirt URI to qemu:///system
I0329 17:14:08.563143 322664 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0329 17:14:08.563265 322664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0329 17:14:08.737153 322664 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-29 17:14:08.72256868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1080-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:753481ec61c7c8955a23d6ff7bc8e4daed455734 Expected:753481ec61c7c8955a23d6ff7bc8e4daed455734} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0329 17:14:08.737261 322664 docker.go:318] overlay module found
I0329 17:14:08.741268 322664 out.go:177] * Using the docker driver based on existing profile
I0329 17:14:08.744165 322664 start.go:297] selected driver: docker
I0329 17:14:08.744183 322664 start.go:901] validating driver "docker" against &{Name:old-k8s-version-551944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-551944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0329 17:14:08.744270 322664 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0329 17:14:08.744963 322664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0329 17:14:08.984751 322664 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-29 17:14:08.968267662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1080-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:753481ec61c7c8955a23d6ff7bc8e4daed455734 Expected:753481ec61c7c8955a23d6ff7bc8e4daed455734} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0329 17:14:08.985156 322664 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0329 17:14:08.985193 322664 cni.go:84] Creating CNI manager for ""
I0329 17:14:08.985271 322664 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0329 17:14:08.985330 322664 start.go:340] cluster config:
{Name:old-k8s-version-551944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-551944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0329 17:14:08.989505 322664 out.go:177] * Starting "old-k8s-version-551944" primary control-plane node in "old-k8s-version-551944" cluster
I0329 17:14:08.993271 322664 cache.go:121] Beginning downloading kic base image for docker with docker
I0329 17:14:08.996939 322664 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
I0329 17:14:09.000522 322664 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0329 17:14:09.000604 322664 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20470-2372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
I0329 17:14:09.000620 322664 cache.go:56] Caching tarball of preloaded images
I0329 17:14:09.000738 322664 preload.go:172] Found /home/jenkins/minikube-integration/20470-2372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0329 17:14:09.000761 322664 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
I0329 17:14:09.000904 322664 profile.go:143] Saving config to /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/old-k8s-version-551944/config.json ...
I0329 17:14:09.001156 322664 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
I0329 17:14:09.049899 322664 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
I0329 17:14:09.049982 322664 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
I0329 17:14:09.050024 322664 cache.go:230] Successfully downloaded all kic artifacts
I0329 17:14:09.050074 322664 start.go:360] acquireMachinesLock for old-k8s-version-551944: {Name:mk4de33acca635fb84a8179b69f5f6c9a9ca2798 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0329 17:14:09.050168 322664 start.go:364] duration metric: took 53.835µs to acquireMachinesLock for "old-k8s-version-551944"
I0329 17:14:09.050226 322664 start.go:96] Skipping create...Using existing machine configuration
I0329 17:14:09.050249 322664 fix.go:54] fixHost starting:
I0329 17:14:09.050596 322664 cli_runner.go:164] Run: docker container inspect old-k8s-version-551944 --format={{.State.Status}}
I0329 17:14:09.113157 322664 fix.go:112] recreateIfNeeded on old-k8s-version-551944: state=Stopped err=<nil>
W0329 17:14:09.113185 322664 fix.go:138] unexpected machine state, will restart: <nil>
I0329 17:14:09.117903 322664 out.go:177] * Restarting existing docker container for "old-k8s-version-551944" ...
I0329 17:14:09.122511 322664 cli_runner.go:164] Run: docker start old-k8s-version-551944
I0329 17:14:09.625248 322664 cli_runner.go:164] Run: docker container inspect old-k8s-version-551944 --format={{.State.Status}}
I0329 17:14:09.670669 322664 kic.go:430] container "old-k8s-version-551944" state is running.
I0329 17:14:09.671088 322664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-551944
I0329 17:14:09.708878 322664 profile.go:143] Saving config to /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/old-k8s-version-551944/config.json ...
I0329 17:14:09.713444 322664 machine.go:93] provisionDockerMachine start ...
I0329 17:14:09.713548 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:09.746928 322664 main.go:141] libmachine: Using SSH client type: native
I0329 17:14:09.747252 322664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33079 <nil> <nil>}
I0329 17:14:09.747269 322664 main.go:141] libmachine: About to run SSH command:
hostname
I0329 17:14:09.747919 322664 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0329 17:14:12.886168 322664 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-551944
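The "handshake failed: EOF" above is benign: the container was just restarted, so minikube resolves the host-mapped SSH port (33079 here, via the docker container inspect template) and keeps redialing until sshd answers, which it does a few seconds later. A minimal sketch of that wait loop, assuming the port from this log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH dials the host-mapped SSH port until the TCP endpoint accepts
    // connections; early failures like the EOF above just mean sshd is not up
    // yet inside the restarted container.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if c, err := net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
                c.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("ssh not reachable at %s within %s", addr, timeout)
    }

    func main() {
        if err := waitForSSH("127.0.0.1:33079", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }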
I0329 17:14:12.886198 322664 ubuntu.go:169] provisioning hostname "old-k8s-version-551944"
I0329 17:14:12.886258 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:12.908188 322664 main.go:141] libmachine: Using SSH client type: native
I0329 17:14:12.908501 322664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33079 <nil> <nil>}
I0329 17:14:12.908512 322664 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-551944 && echo "old-k8s-version-551944" | sudo tee /etc/hostname
I0329 17:14:13.051285 322664 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-551944
I0329 17:14:13.051427 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:13.086073 322664 main.go:141] libmachine: Using SSH client type: native
I0329 17:14:13.086379 322664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33079 <nil> <nil>}
I0329 17:14:13.086397 322664 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-551944' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-551944/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-551944' | sudo tee -a /etc/hosts;
fi
fi
I0329 17:14:13.230395 322664 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0329 17:14:13.230469 322664 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20470-2372/.minikube CaCertPath:/home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20470-2372/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20470-2372/.minikube}
I0329 17:14:13.230512 322664 ubuntu.go:177] setting up certificates
I0329 17:14:13.230594 322664 provision.go:84] configureAuth start
I0329 17:14:13.230683 322664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-551944
I0329 17:14:13.253873 322664 provision.go:143] copyHostCerts
I0329 17:14:13.253936 322664 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2372/.minikube/ca.pem, removing ...
I0329 17:14:13.253958 322664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2372/.minikube/ca.pem
I0329 17:14:13.254049 322664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20470-2372/.minikube/ca.pem (1082 bytes)
I0329 17:14:13.254152 322664 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2372/.minikube/cert.pem, removing ...
I0329 17:14:13.254164 322664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2372/.minikube/cert.pem
I0329 17:14:13.254193 322664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2372/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20470-2372/.minikube/cert.pem (1123 bytes)
I0329 17:14:13.254253 322664 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2372/.minikube/key.pem, removing ...
I0329 17:14:13.254260 322664 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2372/.minikube/key.pem
I0329 17:14:13.254284 322664 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2372/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20470-2372/.minikube/key.pem (1679 bytes)
I0329 17:14:13.254340 322664 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20470-2372/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-551944 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-551944]
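One way to confirm that the san=[...] list reported by provision.go above was actually baked into server.pem is to parse the certificate with crypto/x509; a stdlib-only sketch, assuming this job's cert path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Path from this job; adjust for a local run.
        data, err := os.ReadFile("/home/jenkins/minikube-integration/20470-2372/.minikube/machines/server.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Should match the san=[...] list in the provision.go line above.
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs: ", cert.IPAddresses)
    }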
I0329 17:14:14.781668 322664 provision.go:177] copyRemoteCerts
I0329 17:14:14.781735 322664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0329 17:14:14.781775 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:14.802278 322664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/old-k8s-version-551944/id_rsa Username:docker}
I0329 17:14:14.896901 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0329 17:14:14.929146 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0329 17:14:14.961140 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0329 17:14:14.995805 322664 provision.go:87] duration metric: took 1.765184262s to configureAuth
I0329 17:14:14.995833 322664 ubuntu.go:193] setting minikube options for container-runtime
I0329 17:14:14.996031 322664 config.go:182] Loaded profile config "old-k8s-version-551944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0329 17:14:14.996098 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:15.020538 322664 main.go:141] libmachine: Using SSH client type: native
I0329 17:14:15.020867 322664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33079 <nil> <nil>}
I0329 17:14:15.020879 322664 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0329 17:14:15.189792 322664 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0329 17:14:15.189826 322664 ubuntu.go:71] root file system type: overlay
I0329 17:14:15.189936 322664 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0329 17:14:15.190009 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:15.220285 322664 main.go:141] libmachine: Using SSH client type: native
I0329 17:14:15.220590 322664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33079 <nil> <nil>}
I0329 17:14:15.220664 322664 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0329 17:14:15.372690 322664 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0329 17:14:15.372870 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:15.397680 322664 main.go:141] libmachine: Using SSH client type: native
I0329 17:14:15.398085 322664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33079 <nil> <nil>}
I0329 17:14:15.398112 322664 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0329 17:14:15.536101 322664 main.go:141] libmachine: SSH cmd err, output: <nil>:
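The one-liner above is a compare-and-swap: the freshly rendered docker.service.new only replaces the installed unit (followed by daemon-reload and a docker restart) when the two differ, and the empty output suggests no change was needed on this run. The same idea expressed structurally (a sketch under that reading, not minikube's code):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // swapIfChanged mirrors the diff-or-swap shell above: replace the unit file
    // only when the freshly rendered one differs, so docker is not restarted
    // needlessly.
    func swapIfChanged(current, fresh string) (bool, error) {
        a, _ := os.ReadFile(current) // a missing current file reads as nil, which counts as "changed"
        b, err := os.ReadFile(fresh)
        if err != nil {
            return false, err
        }
        if bytes.Equal(a, b) {
            return false, os.Remove(fresh)
        }
        return true, os.Rename(fresh, current)
    }

    func main() {
        changed, err := swapIfChanged("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new")
        fmt.Println("changed:", changed, "err:", err)
        // A real caller would run `systemctl daemon-reload && systemctl restart docker`
        // only when changed is true, matching the shell one-liner in the log.
    }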
I0329 17:14:15.536141 322664 machine.go:96] duration metric: took 5.822658179s to provisionDockerMachine
I0329 17:14:15.536153 322664 start.go:293] postStartSetup for "old-k8s-version-551944" (driver="docker")
I0329 17:14:15.536165 322664 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0329 17:14:15.536227 322664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0329 17:14:15.536269 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:15.558315 322664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/old-k8s-version-551944/id_rsa Username:docker}
I0329 17:14:15.652354 322664 ssh_runner.go:195] Run: cat /etc/os-release
I0329 17:14:15.655889 322664 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0329 17:14:15.655921 322664 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0329 17:14:15.655931 322664 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0329 17:14:15.655938 322664 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0329 17:14:15.655948 322664 filesync.go:126] Scanning /home/jenkins/minikube-integration/20470-2372/.minikube/addons for local assets ...
I0329 17:14:15.655998 322664 filesync.go:126] Scanning /home/jenkins/minikube-integration/20470-2372/.minikube/files for local assets ...
I0329 17:14:15.656080 322664 filesync.go:149] local asset: /home/jenkins/minikube-integration/20470-2372/.minikube/files/etc/ssl/certs/77082.pem -> 77082.pem in /etc/ssl/certs
I0329 17:14:15.656177 322664 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0329 17:14:15.665223 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/files/etc/ssl/certs/77082.pem --> /etc/ssl/certs/77082.pem (1708 bytes)
I0329 17:14:15.693736 322664 start.go:296] duration metric: took 157.567307ms for postStartSetup
I0329 17:14:15.693826 322664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0329 17:14:15.693882 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:15.714003 322664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/old-k8s-version-551944/id_rsa Username:docker}
I0329 17:14:15.803378 322664 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0329 17:14:15.808793 322664 fix.go:56] duration metric: took 6.758540809s for fixHost
I0329 17:14:15.808815 322664 start.go:83] releasing machines lock for "old-k8s-version-551944", held for 6.758602545s
I0329 17:14:15.808895 322664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-551944
I0329 17:14:15.827491 322664 ssh_runner.go:195] Run: cat /version.json
I0329 17:14:15.827545 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:15.827791 322664 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0329 17:14:15.827854 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:15.858014 322664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/old-k8s-version-551944/id_rsa Username:docker}
I0329 17:14:15.872109 322664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/old-k8s-version-551944/id_rsa Username:docker}
I0329 17:14:15.968601 322664 ssh_runner.go:195] Run: systemctl --version
I0329 17:14:16.134983 322664 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0329 17:14:16.139600 322664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0329 17:14:16.158151 322664 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0329 17:14:16.158236 322664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0329 17:14:16.175454 322664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0329 17:14:16.192908 322664 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
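The find/sed pipeline above rewrites any bridge and podman CNI configs to the 10.244.0.0/16 pod CIDR that kubeadm is given further down. The same edit done structurally rather than textually might look like this (a sketch; it assumes an ipam.subnet field, which only some conflist layouts use, and takes one of the file names from the log):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Rewrites the first plugin's ipam subnet in a CNI conflist to the pod CIDR,
    // the structural equivalent of the sed commands across /etc/cni/net.d above.
    func main() {
        path := "/etc/cni/net.d/87-podman-bridge.conflist"
        raw, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        var conf map[string]any
        if err := json.Unmarshal(raw, &conf); err != nil {
            panic(err)
        }
        if plugins, ok := conf["plugins"].([]any); ok && len(plugins) > 0 {
            if p, ok := plugins[0].(map[string]any); ok {
                if ipam, ok := p["ipam"].(map[string]any); ok {
                    ipam["subnet"] = "10.244.0.0/16"
                }
            }
        }
        out, _ := json.MarshalIndent(conf, "", "  ")
        _ = os.WriteFile(path, out, 0o644)
    }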
I0329 17:14:16.192934 322664 start.go:498] detecting cgroup driver to use...
I0329 17:14:16.192965 322664 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0329 17:14:16.193065 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0329 17:14:16.210706 322664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0329 17:14:16.220907 322664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0329 17:14:16.231216 322664 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0329 17:14:16.231296 322664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0329 17:14:16.241432 322664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0329 17:14:16.251620 322664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0329 17:14:16.261575 322664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0329 17:14:16.271659 322664 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0329 17:14:16.280977 322664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0329 17:14:16.291089 322664 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0329 17:14:16.300252 322664 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0329 17:14:16.309635 322664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0329 17:14:16.415211 322664 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0329 17:14:16.529405 322664 start.go:498] detecting cgroup driver to use...
I0329 17:14:16.529455 322664 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0329 17:14:16.529511 322664 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0329 17:14:16.560761 322664 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0329 17:14:16.560828 322664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0329 17:14:16.583078 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0329 17:14:16.607430 322664 ssh_runner.go:195] Run: which cri-dockerd
I0329 17:14:16.616511 322664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0329 17:14:16.632375 322664 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0329 17:14:16.669983 322664 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0329 17:14:16.857048 322664 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0329 17:14:16.998258 322664 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0329 17:14:16.998370 322664 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0329 17:14:17.032798 322664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0329 17:14:17.174062 322664 ssh_runner.go:195] Run: sudo systemctl restart docker
I0329 17:14:17.853815 322664 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0329 17:14:17.890277 322664 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0329 17:14:17.929863 322664 out.go:235] * Preparing Kubernetes v1.20.0 on Docker 28.0.1 ...
I0329 17:14:17.929951 322664 cli_runner.go:164] Run: docker network inspect old-k8s-version-551944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0329 17:14:17.952204 322664 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0329 17:14:17.957020 322664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0329 17:14:17.969875 322664 kubeadm.go:883] updating cluster {Name:old-k8s-version-551944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-551944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0329 17:14:17.970006 322664 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0329 17:14:17.970069 322664 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0329 17:14:17.996769 322664 docker.go:689] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.0
registry.k8s.io/kube-proxy:v1.20.0
registry.k8s.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
registry.k8s.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
registry.k8s.io/kube-scheduler:v1.20.0
k8s.gcr.io/etcd:3.4.13-0
registry.k8s.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
registry.k8s.io/coredns:1.7.0
k8s.gcr.io/pause:3.2
registry.k8s.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0329 17:14:17.996795 322664 docker.go:619] Images already preloaded, skipping extraction
I0329 17:14:17.996878 322664 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0329 17:14:18.020188 322664 docker.go:689] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.0
registry.k8s.io/kube-proxy:v1.20.0
registry.k8s.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
registry.k8s.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
registry.k8s.io/kube-scheduler:v1.20.0
registry.k8s.io/etcd:3.4.13-0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
registry.k8s.io/coredns:1.7.0
k8s.gcr.io/pause:3.2
registry.k8s.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0329 17:14:18.020212 322664 cache_images.go:84] Images are preloaded, skipping loading
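"Images are preloaded, skipping loading" is decided by listing the daemon's images and checking that the expected v1.20.0 set is already present. A rough sketch of that membership check (image names taken from the list above; not minikube's actual comparison code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The same listing the log runs twice via ssh_runner.
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        // A few of the images the v1.20.0 preload is expected to contain.
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.20.0",
            "registry.k8s.io/etcd:3.4.13-0",
            "registry.k8s.io/coredns:1.7.0",
        }
        for _, img := range want {
            fmt.Printf("%-45s present=%v\n", img, have[img])
        }
    }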
I0329 17:14:18.020223 322664 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 docker true true} ...
I0329 17:14:18.020336 322664 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-551944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-551944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0329 17:14:18.020411 322664 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0329 17:14:18.088706 322664 cni.go:84] Creating CNI manager for ""
I0329 17:14:18.088735 322664 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0329 17:14:18.088744 322664 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0329 17:14:18.088761 322664 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-551944 NodeName:old-k8s-version-551944 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0329 17:14:18.088894 322664 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "old-k8s-version-551944"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
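The generated file, scp'd below to /var/tmp/minikube/kubeadm.yaml.new, is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick structural sanity check with a multi-document decoder, assuming gopkg.in/yaml.v3 is available:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Walks each document in the generated kubeadm config and prints its kind,
    // a cheap way to confirm all four sections above made it into the file.
    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]any
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
        }
    }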
I0329 17:14:18.088976 322664 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0329 17:14:18.101042 322664 binaries.go:44] Found k8s binaries, skipping transfer
I0329 17:14:18.101125 322664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0329 17:14:18.110971 322664 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
I0329 17:14:18.131029 322664 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0329 17:14:18.150484 322664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
I0329 17:14:18.169697 322664 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0329 17:14:18.173248 322664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0329 17:14:18.183974 322664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0329 17:14:18.284887 322664 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0329 17:14:18.300869 322664 certs.go:68] Setting up /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/old-k8s-version-551944 for IP: 192.168.76.2
I0329 17:14:18.300891 322664 certs.go:194] generating shared ca certs ...
I0329 17:14:18.300906 322664 certs.go:226] acquiring lock for ca certs: {Name:mk18223e837bc0e1911104cde7b402d873f6e6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:14:18.301046 322664 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20470-2372/.minikube/ca.key
I0329 17:14:18.301097 322664 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20470-2372/.minikube/proxy-client-ca.key
I0329 17:14:18.301107 322664 certs.go:256] generating profile certs ...
I0329 17:14:18.301192 322664 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/old-k8s-version-551944/client.key
I0329 17:14:18.301250 322664 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/old-k8s-version-551944/apiserver.key.67083718
I0329 17:14:18.301296 322664 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/old-k8s-version-551944/proxy-client.key
I0329 17:14:18.301403 322664 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2372/.minikube/certs/7708.pem (1338 bytes)
W0329 17:14:18.301437 322664 certs.go:480] ignoring /home/jenkins/minikube-integration/20470-2372/.minikube/certs/7708_empty.pem, impossibly tiny 0 bytes
I0329 17:14:18.301450 322664 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca-key.pem (1679 bytes)
I0329 17:14:18.301478 322664 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca.pem (1082 bytes)
I0329 17:14:18.301503 322664 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2372/.minikube/certs/cert.pem (1123 bytes)
I0329 17:14:18.301540 322664 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2372/.minikube/certs/key.pem (1679 bytes)
I0329 17:14:18.301587 322664 certs.go:484] found cert: /home/jenkins/minikube-integration/20470-2372/.minikube/files/etc/ssl/certs/77082.pem (1708 bytes)
I0329 17:14:18.302179 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0329 17:14:18.343349 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0329 17:14:18.399703 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0329 17:14:18.438099 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0329 17:14:18.496731 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/old-k8s-version-551944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0329 17:14:18.556214 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/old-k8s-version-551944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0329 17:14:18.612067 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/old-k8s-version-551944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0329 17:14:18.655170 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/old-k8s-version-551944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0329 17:14:18.720249 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/certs/7708.pem --> /usr/share/ca-certificates/7708.pem (1338 bytes)
I0329 17:14:18.772559 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/files/etc/ssl/certs/77082.pem --> /usr/share/ca-certificates/77082.pem (1708 bytes)
I0329 17:14:18.798038 322664 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0329 17:14:18.825752 322664 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0329 17:14:18.845310 322664 ssh_runner.go:195] Run: openssl version
I0329 17:14:18.851268 322664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0329 17:14:18.861356 322664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0329 17:14:18.865304 322664 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 29 16:18 /usr/share/ca-certificates/minikubeCA.pem
I0329 17:14:18.865413 322664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0329 17:14:18.872541 322664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0329 17:14:18.882969 322664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7708.pem && ln -fs /usr/share/ca-certificates/7708.pem /etc/ssl/certs/7708.pem"
I0329 17:14:18.892441 322664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7708.pem
I0329 17:14:18.896365 322664 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 29 16:26 /usr/share/ca-certificates/7708.pem
I0329 17:14:18.896479 322664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7708.pem
I0329 17:14:18.903733 322664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7708.pem /etc/ssl/certs/51391683.0"
I0329 17:14:18.913257 322664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77082.pem && ln -fs /usr/share/ca-certificates/77082.pem /etc/ssl/certs/77082.pem"
I0329 17:14:18.923072 322664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77082.pem
I0329 17:14:18.927030 322664 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 29 16:26 /usr/share/ca-certificates/77082.pem
I0329 17:14:18.927150 322664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77082.pem
I0329 17:14:18.934074 322664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77082.pem /etc/ssl/certs/3ec20f2e.0"
I0329 17:14:18.943664 322664 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0329 17:14:18.947505 322664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0329 17:14:18.954464 322664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0329 17:14:18.961649 322664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0329 17:14:18.968878 322664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0329 17:14:18.976209 322664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0329 17:14:18.983516 322664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0329 17:14:18.990677 322664 kubeadm.go:392] StartCluster: {Name:old-k8s-version-551944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-551944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0329 17:14:18.990867 322664 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0329 17:14:19.008796 322664 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0329 17:14:19.018657 322664 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0329 17:14:19.018724 322664 kubeadm.go:593] restartPrimaryControlPlane start ...
I0329 17:14:19.018808 322664 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0329 17:14:19.027835 322664 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0329 17:14:19.028351 322664 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-551944" does not appear in /home/jenkins/minikube-integration/20470-2372/kubeconfig
I0329 17:14:19.028512 322664 kubeconfig.go:62] /home/jenkins/minikube-integration/20470-2372/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-551944" cluster setting kubeconfig missing "old-k8s-version-551944" context setting]
I0329 17:14:19.028907 322664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20470-2372/kubeconfig: {Name:mk819132f5bc8b9552e57e3c9cd2aa542e3496e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:14:19.030562 322664 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0329 17:14:19.039888 322664 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0329 17:14:19.039968 322664 kubeadm.go:597] duration metric: took 21.224309ms to restartPrimaryControlPlane
I0329 17:14:19.039992 322664 kubeadm.go:394] duration metric: took 49.323075ms to StartCluster
I0329 17:14:19.040036 322664 settings.go:142] acquiring lock: {Name:mk6563965e757ccb1b57c1e398fe3fea4dfda4e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:14:19.040125 322664 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20470-2372/kubeconfig
I0329 17:14:19.040850 322664 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20470-2372/kubeconfig: {Name:mk819132f5bc8b9552e57e3c9cd2aa542e3496e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:14:19.041112 322664 start.go:238] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0329 17:14:19.041482 322664 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0329 17:14:19.041575 322664 config.go:182] Loaded profile config "old-k8s-version-551944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0329 17:14:19.041610 322664 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-551944"
I0329 17:14:19.041642 322664 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-551944"
I0329 17:14:19.041660 322664 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-551944"
I0329 17:14:19.041681 322664 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-551944"
W0329 17:14:19.041703 322664 addons.go:247] addon storage-provisioner should already be in state true
I0329 17:14:19.041757 322664 host.go:66] Checking if "old-k8s-version-551944" exists ...
I0329 17:14:19.041937 322664 cli_runner.go:164] Run: docker container inspect old-k8s-version-551944 --format={{.State.Status}}
I0329 17:14:19.042500 322664 cli_runner.go:164] Run: docker container inspect old-k8s-version-551944 --format={{.State.Status}}
I0329 17:14:19.044052 322664 addons.go:69] Setting dashboard=true in profile "old-k8s-version-551944"
I0329 17:14:19.044073 322664 addons.go:238] Setting addon dashboard=true in "old-k8s-version-551944"
W0329 17:14:19.044080 322664 addons.go:247] addon dashboard should already be in state true
I0329 17:14:19.044104 322664 host.go:66] Checking if "old-k8s-version-551944" exists ...
I0329 17:14:19.044517 322664 cli_runner.go:164] Run: docker container inspect old-k8s-version-551944 --format={{.State.Status}}
I0329 17:14:19.045059 322664 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-551944"
I0329 17:14:19.045091 322664 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-551944"
W0329 17:14:19.045099 322664 addons.go:247] addon metrics-server should already be in state true
I0329 17:14:19.045123 322664 host.go:66] Checking if "old-k8s-version-551944" exists ...
I0329 17:14:19.045578 322664 cli_runner.go:164] Run: docker container inspect old-k8s-version-551944 --format={{.State.Status}}
I0329 17:14:19.049917 322664 out.go:177] * Verifying Kubernetes components...
I0329 17:14:19.052881 322664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0329 17:14:19.116452 322664 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0329 17:14:19.120896 322664 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0329 17:14:19.123740 322664 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0329 17:14:19.123763 322664 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0329 17:14:19.123833 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:19.129433 322664 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-551944"
W0329 17:14:19.129456 322664 addons.go:247] addon default-storageclass should already be in state true
I0329 17:14:19.129481 322664 host.go:66] Checking if "old-k8s-version-551944" exists ...
I0329 17:14:19.129888 322664 cli_runner.go:164] Run: docker container inspect old-k8s-version-551944 --format={{.State.Status}}
I0329 17:14:19.146248 322664 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0329 17:14:19.146370 322664 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0329 17:14:19.149192 322664 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0329 17:14:19.149216 322664 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0329 17:14:19.149281 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:19.149497 322664 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:14:19.149505 322664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0329 17:14:19.149539 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:19.186773 322664 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0329 17:14:19.186793 322664 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0329 17:14:19.186855 322664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551944
I0329 17:14:19.202709 322664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/old-k8s-version-551944/id_rsa Username:docker}
I0329 17:14:19.245603 322664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/old-k8s-version-551944/id_rsa Username:docker}
I0329 17:14:19.247062 322664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/old-k8s-version-551944/id_rsa Username:docker}
I0329 17:14:19.266854 322664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/old-k8s-version-551944/id_rsa Username:docker}
I0329 17:14:19.309696 322664 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0329 17:14:19.347292 322664 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-551944" to be "Ready" ...
I0329 17:14:19.386359 322664 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0329 17:14:19.386380 322664 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0329 17:14:19.422324 322664 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0329 17:14:19.422345 322664 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0329 17:14:19.447763 322664 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0329 17:14:19.447786 322664 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0329 17:14:19.470795 322664 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0329 17:14:19.470818 322664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0329 17:14:19.494743 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:14:19.513198 322664 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0329 17:14:19.513222 322664 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0329 17:14:19.526836 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0329 17:14:19.533066 322664 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0329 17:14:19.533090 322664 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0329 17:14:19.557311 322664 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0329 17:14:19.557335 322664 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0329 17:14:19.587859 322664 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0329 17:14:19.587885 322664 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0329 17:14:19.618868 322664 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0329 17:14:19.618895 322664 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0329 17:14:19.655034 322664 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0329 17:14:19.655102 322664 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0329 17:14:19.681027 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0329 17:14:19.697536 322664 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0329 17:14:19.697559 322664 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0329 17:14:19.840655 322664 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:14:19.840678 322664 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
W0329 17:14:19.845544 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:19.845577 322664 retry.go:31] will retry after 228.126506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:14:19.845635 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:19.845646 322664 retry.go:31] will retry after 345.891861ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:19.891074 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0329 17:14:19.925032 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:19.925064 322664 retry.go:31] will retry after 347.127768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:14:20.014444 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.014480 322664 retry.go:31] will retry after 245.57105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.074805 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0329 17:14:20.178547 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.178627 322664 retry.go:31] will retry after 349.292711ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.191896 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0329 17:14:20.260235 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:14:20.272628 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0329 17:14:20.337940 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.338018 322664 retry.go:31] will retry after 398.919558ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:14:20.462576 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.462674 322664 retry.go:31] will retry after 422.644165ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:14:20.485011 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.485087 322664 retry.go:31] will retry after 466.724934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.528376 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0329 17:14:20.621547 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.621632 322664 retry.go:31] will retry after 841.860287ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.737904 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0329 17:14:20.835740 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.835770 322664 retry.go:31] will retry after 645.750512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.885498 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:14:20.952817 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0329 17:14:20.994792 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:20.994845 322664 retry.go:31] will retry after 297.954177ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:14:21.109799 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:21.109873 322664 retry.go:31] will retry after 799.765064ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:21.293005 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:14:21.347862 322664 node_ready.go:53] error getting node "old-k8s-version-551944": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-551944": dial tcp 192.168.76.2:8443: connect: connection refused
W0329 17:14:21.408503 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:21.408585 322664 retry.go:31] will retry after 434.616158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:21.463852 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:14:21.482164 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0329 17:14:21.587795 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:21.587905 322664 retry.go:31] will retry after 1.059492674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:14:21.672780 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:21.672823 322664 retry.go:31] will retry after 710.655239ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:21.844052 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:14:21.910080 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0329 17:14:21.976096 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:21.976132 322664 retry.go:31] will retry after 1.045691326s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:14:22.239105 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:22.239138 322664 retry.go:31] will retry after 550.88444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:22.383993 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0329 17:14:22.520009 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:22.520035 322664 retry.go:31] will retry after 851.790221ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:22.648442 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:14:22.790778 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0329 17:14:22.819920 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:22.819948 322664 retry.go:31] will retry after 725.443461ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:14:23.014008 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:23.014047 322664 retry.go:31] will retry after 1.075475806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:23.022341 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0329 17:14:23.196828 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:23.196856 322664 retry.go:31] will retry after 1.414127305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:23.348595 322664 node_ready.go:53] error getting node "old-k8s-version-551944": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-551944": dial tcp 192.168.76.2:8443: connect: connection refused
I0329 17:14:23.372914 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0329 17:14:23.545591 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0329 17:14:23.735178 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:23.735203 322664 retry.go:31] will retry after 1.136126253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0329 17:14:23.822392 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:23.822421 322664 retry.go:31] will retry after 2.564341602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0329 17:14:24.090401 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0329 17:14:24.612132 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0329 17:14:24.871970 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0329 17:14:26.386948 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0329 17:14:34.348638 322664 node_ready.go:53] error getting node "old-k8s-version-551944": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-551944": net/http: TLS handshake timeout
I0329 17:14:34.918976 322664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.828533425s)
W0329 17:14:34.919007 322664 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I0329 17:14:34.919025 322664 retry.go:31] will retry after 2.032916788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I0329 17:14:35.608869 322664 node_ready.go:49] node "old-k8s-version-551944" has status "Ready":"True"
I0329 17:14:35.608891 322664 node_ready.go:38] duration metric: took 16.261565814s for node "old-k8s-version-551944" to be "Ready" ...
I0329 17:14:35.608901 322664 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0329 17:14:35.904728 322664 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-b9w86" in "kube-system" namespace to be "Ready" ...
I0329 17:14:36.086680 322664 pod_ready.go:93] pod "coredns-74ff55c5b-b9w86" in "kube-system" namespace has status "Ready":"True"
I0329 17:14:36.086704 322664 pod_ready.go:82] duration metric: took 181.952199ms for pod "coredns-74ff55c5b-b9w86" in "kube-system" namespace to be "Ready" ...
I0329 17:14:36.086727 322664 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-551944" in "kube-system" namespace to be "Ready" ...
I0329 17:14:36.166803 322664 pod_ready.go:93] pod "etcd-old-k8s-version-551944" in "kube-system" namespace has status "Ready":"True"
I0329 17:14:36.166824 322664 pod_ready.go:82] duration metric: took 80.090359ms for pod "etcd-old-k8s-version-551944" in "kube-system" namespace to be "Ready" ...
I0329 17:14:36.166837 322664 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-551944" in "kube-system" namespace to be "Ready" ...
I0329 17:14:36.952364 322664 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0329 17:14:38.188790 322664 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-551944" in "kube-system" namespace has status "Ready":"False"
I0329 17:14:38.203467 322664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.591289417s)
I0329 17:14:38.203730 322664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (13.331734603s)
I0329 17:14:38.203974 322664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.817001121s)
I0329 17:14:38.206731 322664 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-551944 addons enable metrics-server
I0329 17:14:38.603811 322664 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.651408758s)
I0329 17:14:38.603847 322664 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-551944"
I0329 17:14:38.608800 322664 out.go:177] * Enabled addons: storage-provisioner, dashboard, default-storageclass, metrics-server
I0329 17:14:38.611775 322664 addons.go:514] duration metric: took 19.570281946s for enable addons: enabled=[storage-provisioner dashboard default-storageclass metrics-server]
I0329 17:14:40.671424 322664 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-551944" in "kube-system" namespace has status "Ready":"False"
I0329 17:14:42.672329 322664 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-551944" in "kube-system" namespace has status "Ready":"False"
I0329 17:14:43.671681 322664 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-551944" in "kube-system" namespace has status "Ready":"True"
I0329 17:14:43.671706 322664 pod_ready.go:82] duration metric: took 7.504861026s for pod "kube-apiserver-old-k8s-version-551944" in "kube-system" namespace to be "Ready" ...
I0329 17:14:43.671719 322664 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-551944" in "kube-system" namespace to be "Ready" ...
I0329 17:14:45.677854 322664 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-551944" in "kube-system" namespace has status "Ready":"False"
[... identical pod_ready.go:103 polls every 2-3s report "kube-controller-manager-old-k8s-version-551944" still "Ready":"False" from 17:14:48 through 17:15:58 ...]
I0329 17:16:00.181856 322664 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-551944" in "kube-system" namespace has status "Ready":"False"
I0329 17:16:00.676472 322664 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-551944" in "kube-system" namespace has status "Ready":"True"
I0329 17:16:00.676500 322664 pod_ready.go:82] duration metric: took 1m17.004773312s for pod "kube-controller-manager-old-k8s-version-551944" in "kube-system" namespace to be "Ready" ...
I0329 17:16:00.676512 322664 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pmp5k" in "kube-system" namespace to be "Ready" ...
I0329 17:16:00.680322 322664 pod_ready.go:93] pod "kube-proxy-pmp5k" in "kube-system" namespace has status "Ready":"True"
I0329 17:16:00.680346 322664 pod_ready.go:82] duration metric: took 3.826317ms for pod "kube-proxy-pmp5k" in "kube-system" namespace to be "Ready" ...
I0329 17:16:00.680366 322664 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-551944" in "kube-system" namespace to be "Ready" ...
I0329 17:16:00.685248 322664 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-551944" in "kube-system" namespace has status "Ready":"True"
I0329 17:16:00.685273 322664 pod_ready.go:82] duration metric: took 4.898628ms for pod "kube-scheduler-old-k8s-version-551944" in "kube-system" namespace to be "Ready" ...
I0329 17:16:00.685286 322664 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-wrjnc" in "kube-system" namespace to be "Ready" ...
I0329 17:16:02.690805 322664 pod_ready.go:103] pod "metrics-server-9975d5f86-wrjnc" in "kube-system" namespace has status "Ready":"False"
[... identical pod_ready.go:103 polls every 2-3s report "metrics-server-9975d5f86-wrjnc" still "Ready":"False" from 17:16:05 through 17:19:57 ...]
I0329 17:19:59.690350 322664 pod_ready.go:103] pod "metrics-server-9975d5f86-wrjnc" in "kube-system" namespace has status "Ready":"False"
I0329 17:20:00.691048 322664 pod_ready.go:82] duration metric: took 4m0.005748474s for pod "metrics-server-9975d5f86-wrjnc" in "kube-system" namespace to be "Ready" ...
E0329 17:20:00.691072 322664 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0329 17:20:00.691082 322664 pod_ready.go:39] duration metric: took 5m25.08216839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0329 17:20:00.691099 322664 api_server.go:52] waiting for apiserver process to appear ...
I0329 17:20:00.691186 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0329 17:20:00.710247 322664 logs.go:282] 2 containers: [bc8afe7816f3 9ed3257bd77f]
I0329 17:20:00.710333 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0329 17:20:00.728643 322664 logs.go:282] 2 containers: [f7b7044d3d79 6c74b83fdc73]
I0329 17:20:00.728720 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0329 17:20:00.747600 322664 logs.go:282] 2 containers: [0c80cbb76391 8e2a8ebfddb3]
I0329 17:20:00.747688 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0329 17:20:00.774598 322664 logs.go:282] 2 containers: [99605fcde49b f2fc2725f63d]
I0329 17:20:00.774687 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0329 17:20:00.793592 322664 logs.go:282] 2 containers: [0c2204880506 e74725ac4203]
I0329 17:20:00.793669 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0329 17:20:00.811914 322664 logs.go:282] 2 containers: [945138d280da 492d0056000d]
I0329 17:20:00.812011 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0329 17:20:00.837004 322664 logs.go:282] 0 containers: []
W0329 17:20:00.837025 322664 logs.go:284] No container was found matching "kindnet"
I0329 17:20:00.837082 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0329 17:20:00.856918 322664 logs.go:282] 1 containers: [ab0ca8585388]
I0329 17:20:00.856998 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0329 17:20:00.875921 322664 logs.go:282] 2 containers: [b4f0beb7b2eb 766ed8e00c6e]
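With the wait budget exhausted, the run switches to log collection: each component's containers are located via the k8s_<name> prefix that the Docker runtime gives Kubernetes containers, then the last 400 lines of each are tailed. A rough standalone sketch of that collection step (hypothetical and simplified; the real logs.go also gathers kubelet, dmesg, container status, and describe-nodes output, as the lines below show):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kubernetes-dashboard", "storage-provisioner",
	}
	for _, c := range components {
		// mirror of: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: listing failed: %v\n", c, err)
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			// mirror of: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
		}
	}
}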
I0329 17:20:00.875961 322664 logs.go:123] Gathering logs for storage-provisioner [b4f0beb7b2eb] ...
I0329 17:20:00.875973 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f0beb7b2eb"
I0329 17:20:00.900512 322664 logs.go:123] Gathering logs for Docker ...
I0329 17:20:00.900553 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0329 17:20:00.926212 322664 logs.go:123] Gathering logs for dmesg ...
I0329 17:20:00.926244 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0329 17:20:00.948382 322664 logs.go:123] Gathering logs for etcd [6c74b83fdc73] ...
I0329 17:20:00.948455 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c74b83fdc73"
I0329 17:20:00.974827 322664 logs.go:123] Gathering logs for kube-proxy [e74725ac4203] ...
I0329 17:20:00.974856 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e74725ac4203"
I0329 17:20:01.007160 322664 logs.go:123] Gathering logs for storage-provisioner [766ed8e00c6e] ...
I0329 17:20:01.007188 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 766ed8e00c6e"
I0329 17:20:01.029209 322664 logs.go:123] Gathering logs for kubelet ...
I0329 17:20:01.029235 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0329 17:20:01.085115 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.461825 1457 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:01.085379 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.461911 1457 reflector.go:138] object-"kube-system"/"coredns-token-nj6cz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-nj6cz" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:01.088098 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.649071 1457 reflector.go:138] object-"kube-system"/"storage-provisioner-token-76lpq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-76lpq" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:01.088325 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.671451 1457 reflector.go:138] object-"kube-system"/"kube-proxy-token-gkgvj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gkgvj" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:01.088531 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.671721 1457 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:01.088761 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.671877 1457 reflector.go:138] object-"kube-system"/"metrics-server-token-mxnxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-mxnxc" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:01.095643 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:38 old-k8s-version-551944 kubelet[1457]: E0329 17:14:38.706614 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:20:01.096321 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:39 old-k8s-version-551944 kubelet[1457]: E0329 17:14:39.084575 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:01.099103 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:53 old-k8s-version-551944 kubelet[1457]: E0329 17:14:53.280979 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:20:01.103240 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:01 old-k8s-version-551944 kubelet[1457]: E0329 17:15:01.279754 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0329 17:20:01.103618 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:01 old-k8s-version-551944 kubelet[1457]: E0329 17:15:01.556696 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:01.104135 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:07 old-k8s-version-551944 kubelet[1457]: E0329 17:15:07.259823 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:01.104579 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:09 old-k8s-version-551944 kubelet[1457]: E0329 17:15:09.632023 1457 pod_workers.go:191] Error syncing pod 8164a243-0648-4dc5-9fb5-f5e619d89b1b ("storage-provisioner_kube-system(8164a243-0648-4dc5-9fb5-f5e619d89b1b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8164a243-0648-4dc5-9fb5-f5e619d89b1b)"
[... the same four kubelet problems recur for the remainder of the capture, roughly every 5-15s through Mar 29 17:19:59: ErrImagePull ("no such host") and ImagePullBackOff for metrics-server pulling fake.domain/registry.k8s.io/echoserver:1.4, and ErrImagePull (v1 image format disabled) and ImagePullBackOff for dashboard-metrics-scraper pulling registry.k8s.io/echoserver:1.4 ...]
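Every recurring kubelet problem above traces to two image pulls that cannot succeed on this node: metrics-server is pointed at fake.domain/registry.k8s.io/echoserver:1.4, an unresolvable registry host (apparently a deliberate test fixture), and dashboard-metrics-scraper at registry.k8s.io/echoserver:1.4, a v1-format image the node's Docker daemon rejects by default per the DEPRECATION NOTICE above. The DNS half is trivial to confirm; a tiny check (hypothetical, not part of the test) reproduces the kubelet's "no such host" error:

package main

import (
	"fmt"
	"net"
)

func main() {
	// the kubelet errors above show: dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	_, err := net.LookupHost("fake.domain")
	fmt.Println(err) // expected: lookup fake.domain: no such host
}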
I0329 17:20:01.127957 322664 logs.go:123] Gathering logs for kube-apiserver [bc8afe7816f3] ...
I0329 17:20:01.127972 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8afe7816f3"
I0329 17:20:01.197564 322664 logs.go:123] Gathering logs for kube-apiserver [9ed3257bd77f] ...
I0329 17:20:01.197596 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ed3257bd77f"
I0329 17:20:01.307142 322664 logs.go:123] Gathering logs for kube-proxy [0c2204880506] ...
I0329 17:20:01.307179 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2204880506"
I0329 17:20:01.334434 322664 logs.go:123] Gathering logs for kube-controller-manager [492d0056000d] ...
I0329 17:20:01.334468 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 492d0056000d"
I0329 17:20:01.388526 322664 logs.go:123] Gathering logs for container status ...
I0329 17:20:01.388568 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0329 17:20:01.470440 322664 logs.go:123] Gathering logs for kube-scheduler [99605fcde49b] ...
I0329 17:20:01.470472 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99605fcde49b"
I0329 17:20:01.497559 322664 logs.go:123] Gathering logs for kube-scheduler [f2fc2725f63d] ...
I0329 17:20:01.497590 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2fc2725f63d"
I0329 17:20:01.553372 322664 logs.go:123] Gathering logs for kube-controller-manager [945138d280da] ...
I0329 17:20:01.553405 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945138d280da"
I0329 17:20:01.616920 322664 logs.go:123] Gathering logs for kubernetes-dashboard [ab0ca8585388] ...
I0329 17:20:01.616957 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab0ca8585388"
I0329 17:20:01.642322 322664 logs.go:123] Gathering logs for describe nodes ...
I0329 17:20:01.642351 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0329 17:20:01.832949 322664 logs.go:123] Gathering logs for etcd [f7b7044d3d79] ...
I0329 17:20:01.833030 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7b7044d3d79"
I0329 17:20:01.867660 322664 logs.go:123] Gathering logs for coredns [0c80cbb76391] ...
I0329 17:20:01.867695 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c80cbb76391"
I0329 17:20:01.898432 322664 logs.go:123] Gathering logs for coredns [8e2a8ebfddb3] ...
I0329 17:20:01.898497 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2a8ebfddb3"
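(Each "Gathering logs" step above is just a bounded docker logs call run over SSH; a minimal sketch of reproducing one by hand against a coredns container ID from this run, adding --timestamps, which the collector does not pass, to make the tail easier to correlate:

    # Tail the last 400 log lines of one container, with per-line timestamps.
    docker logs --tail 400 --timestamps 8e2a8ebfddb3
)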
I0329 17:20:01.927808 322664 out.go:358] Setting ErrFile to fd 2...
I0329 17:20:01.927838 322664 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0329 17:20:01.927916 322664 out.go:270] X Problems detected in kubelet:
W0329 17:20:01.927933 322664 out.go:270] Mar 29 17:19:33 old-k8s-version-551944 kubelet[1457]: E0329 17:19:33.260820 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:01.927966 322664 out.go:270] Mar 29 17:19:37 old-k8s-version-551944 kubelet[1457]: E0329 17:19:37.260349 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:01.927980 322664 out.go:270] Mar 29 17:19:45 old-k8s-version-551944 kubelet[1457]: E0329 17:19:45.261516 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:01.927988 322664 out.go:270] Mar 29 17:19:50 old-k8s-version-551944 kubelet[1457]: E0329 17:19:50.260407 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:01.927994 322664 out.go:270] Mar 29 17:19:59 old-k8s-version-551944 kubelet[1457]: E0329 17:19:59.260075 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
I0329 17:20:01.927999 322664 out.go:358] Setting ErrFile to fd 2...
I0329 17:20:01.928005 322664 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:20:11.930202 322664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0329 17:20:11.944001 322664 api_server.go:72] duration metric: took 5m52.902826982s to wait for apiserver process to appear ...
I0329 17:20:11.944023 322664 api_server.go:88] waiting for apiserver healthz status ...
I0329 17:20:11.944103 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0329 17:20:11.966935 322664 logs.go:282] 2 containers: [bc8afe7816f3 9ed3257bd77f]
I0329 17:20:11.967022 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0329 17:20:12.017540 322664 logs.go:282] 2 containers: [f7b7044d3d79 6c74b83fdc73]
I0329 17:20:12.017755 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0329 17:20:12.051543 322664 logs.go:282] 2 containers: [0c80cbb76391 8e2a8ebfddb3]
I0329 17:20:12.051627 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0329 17:20:12.085291 322664 logs.go:282] 2 containers: [99605fcde49b f2fc2725f63d]
I0329 17:20:12.085456 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0329 17:20:12.110940 322664 logs.go:282] 2 containers: [0c2204880506 e74725ac4203]
I0329 17:20:12.111014 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0329 17:20:12.133570 322664 logs.go:282] 2 containers: [945138d280da 492d0056000d]
I0329 17:20:12.133651 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0329 17:20:12.153706 322664 logs.go:282] 0 containers: []
W0329 17:20:12.153729 322664 logs.go:284] No container was found matching "kindnet"
I0329 17:20:12.153780 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0329 17:20:12.182414 322664 logs.go:282] 1 containers: [ab0ca8585388]
I0329 17:20:12.182511 322664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0329 17:20:12.215578 322664 logs.go:282] 2 containers: [b4f0beb7b2eb 766ed8e00c6e]
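(The per-component discovery above can be reproduced in one loop; a sketch, assuming the same k8s_<component> container-name convention seen in the filters above:

    # List every container (running or exited) for each control-plane component,
    # printing its ID and current status.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== ${c} =="
      docker ps -a --filter="name=k8s_${c}" --format '{{.ID}}\t{{.Status}}'
    done
)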
I0329 17:20:12.215659 322664 logs.go:123] Gathering logs for etcd [6c74b83fdc73] ...
I0329 17:20:12.215688 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c74b83fdc73"
I0329 17:20:12.249452 322664 logs.go:123] Gathering logs for kube-apiserver [bc8afe7816f3] ...
I0329 17:20:12.249531 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc8afe7816f3"
I0329 17:20:12.311703 322664 logs.go:123] Gathering logs for kube-controller-manager [492d0056000d] ...
I0329 17:20:12.311740 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 492d0056000d"
I0329 17:20:12.373652 322664 logs.go:123] Gathering logs for kubernetes-dashboard [ab0ca8585388] ...
I0329 17:20:12.373688 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab0ca8585388"
I0329 17:20:12.406007 322664 logs.go:123] Gathering logs for describe nodes ...
I0329 17:20:12.406086 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0329 17:20:12.663322 322664 logs.go:123] Gathering logs for kube-apiserver [9ed3257bd77f] ...
I0329 17:20:12.663389 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ed3257bd77f"
I0329 17:20:12.779785 322664 logs.go:123] Gathering logs for coredns [0c80cbb76391] ...
I0329 17:20:12.779825 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c80cbb76391"
I0329 17:20:12.805682 322664 logs.go:123] Gathering logs for coredns [8e2a8ebfddb3] ...
I0329 17:20:12.805711 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e2a8ebfddb3"
I0329 17:20:12.855623 322664 logs.go:123] Gathering logs for kube-scheduler [99605fcde49b] ...
I0329 17:20:12.855665 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99605fcde49b"
I0329 17:20:12.896235 322664 logs.go:123] Gathering logs for kube-scheduler [f2fc2725f63d] ...
I0329 17:20:12.896312 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f2fc2725f63d"
I0329 17:20:12.937197 322664 logs.go:123] Gathering logs for kube-controller-manager [945138d280da] ...
I0329 17:20:12.937273 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 945138d280da"
I0329 17:20:12.993709 322664 logs.go:123] Gathering logs for storage-provisioner [b4f0beb7b2eb] ...
I0329 17:20:12.993781 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f0beb7b2eb"
I0329 17:20:13.027483 322664 logs.go:123] Gathering logs for kubelet ...
I0329 17:20:13.027522 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0329 17:20:13.146794 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.461825 1457 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:13.147084 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.461911 1457 reflector.go:138] object-"kube-system"/"coredns-token-nj6cz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-nj6cz" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:13.150004 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.649071 1457 reflector.go:138] object-"kube-system"/"storage-provisioner-token-76lpq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-76lpq" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:13.150268 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.671451 1457 reflector.go:138] object-"kube-system"/"kube-proxy-token-gkgvj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gkgvj" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:13.150503 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.671721 1457 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:13.150855 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:35 old-k8s-version-551944 kubelet[1457]: E0329 17:14:35.671877 1457 reflector.go:138] object-"kube-system"/"metrics-server-token-mxnxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-mxnxc" is forbidden: User "system:node:old-k8s-version-551944" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-551944' and this object
W0329 17:20:13.162701 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:38 old-k8s-version-551944 kubelet[1457]: E0329 17:14:38.706614 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:20:13.163475 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:39 old-k8s-version-551944 kubelet[1457]: E0329 17:14:39.084575 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.166312 322664 logs.go:138] Found kubelet problem: Mar 29 17:14:53 old-k8s-version-551944 kubelet[1457]: E0329 17:14:53.280979 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:20:13.170536 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:01 old-k8s-version-551944 kubelet[1457]: E0329 17:15:01.279754 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0329 17:20:13.170968 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:01 old-k8s-version-551944 kubelet[1457]: E0329 17:15:01.556696 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.171515 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:07 old-k8s-version-551944 kubelet[1457]: E0329 17:15:07.259823 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.171983 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:09 old-k8s-version-551944 kubelet[1457]: E0329 17:15:09.632023 1457 pod_workers.go:191] Error syncing pod 8164a243-0648-4dc5-9fb5-f5e619d89b1b ("storage-provisioner_kube-system(8164a243-0648-4dc5-9fb5-f5e619d89b1b)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8164a243-0648-4dc5-9fb5-f5e619d89b1b)"
W0329 17:20:13.174659 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:13 old-k8s-version-551944 kubelet[1457]: E0329 17:15:13.723140 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0329 17:20:13.176756 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:22 old-k8s-version-551944 kubelet[1457]: E0329 17:15:22.329063 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:20:13.177311 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:26 old-k8s-version-551944 kubelet[1457]: E0329 17:15:26.260588 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.177525 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:33 old-k8s-version-551944 kubelet[1457]: E0329 17:15:33.270985 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.180353 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:38 old-k8s-version-551944 kubelet[1457]: E0329 17:15:38.768840 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0329 17:20:13.180634 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:45 old-k8s-version-551944 kubelet[1457]: E0329 17:15:45.261466 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.180876 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:51 old-k8s-version-551944 kubelet[1457]: E0329 17:15:51.260320 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.181131 322664 logs.go:138] Found kubelet problem: Mar 29 17:15:59 old-k8s-version-551944 kubelet[1457]: E0329 17:15:59.260113 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.181435 322664 logs.go:138] Found kubelet problem: Mar 29 17:16:06 old-k8s-version-551944 kubelet[1457]: E0329 17:16:06.284365 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.183597 322664 logs.go:138] Found kubelet problem: Mar 29 17:16:12 old-k8s-version-551944 kubelet[1457]: E0329 17:16:12.312627 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:20:13.186257 322664 logs.go:138] Found kubelet problem: Mar 29 17:16:20 old-k8s-version-551944 kubelet[1457]: E0329 17:16:20.941638 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0329 17:20:13.186491 322664 logs.go:138] Found kubelet problem: Mar 29 17:16:24 old-k8s-version-551944 kubelet[1457]: E0329 17:16:24.260858 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.186741 322664 logs.go:138] Found kubelet problem: Mar 29 17:16:31 old-k8s-version-551944 kubelet[1457]: E0329 17:16:31.260176 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.186953 322664 logs.go:138] Found kubelet problem: Mar 29 17:16:39 old-k8s-version-551944 kubelet[1457]: E0329 17:16:39.265809 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.187177 322664 logs.go:138] Found kubelet problem: Mar 29 17:16:43 old-k8s-version-551944 kubelet[1457]: E0329 17:16:43.263922 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.187391 322664 logs.go:138] Found kubelet problem: Mar 29 17:16:50 old-k8s-version-551944 kubelet[1457]: E0329 17:16:50.260272 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.187618 322664 logs.go:138] Found kubelet problem: Mar 29 17:16:55 old-k8s-version-551944 kubelet[1457]: E0329 17:16:55.260304 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.187834 322664 logs.go:138] Found kubelet problem: Mar 29 17:17:01 old-k8s-version-551944 kubelet[1457]: E0329 17:17:01.260520 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.188065 322664 logs.go:138] Found kubelet problem: Mar 29 17:17:08 old-k8s-version-551944 kubelet[1457]: E0329 17:17:08.277299 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.188290 322664 logs.go:138] Found kubelet problem: Mar 29 17:17:15 old-k8s-version-551944 kubelet[1457]: E0329 17:17:15.260289 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.188520 322664 logs.go:138] Found kubelet problem: Mar 29 17:17:20 old-k8s-version-551944 kubelet[1457]: E0329 17:17:20.260684 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.188739 322664 logs.go:138] Found kubelet problem: Mar 29 17:17:29 old-k8s-version-551944 kubelet[1457]: E0329 17:17:29.260064 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.188970 322664 logs.go:138] Found kubelet problem: Mar 29 17:17:34 old-k8s-version-551944 kubelet[1457]: E0329 17:17:34.264202 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.192307 322664 logs.go:138] Found kubelet problem: Mar 29 17:17:40 old-k8s-version-551944 kubelet[1457]: E0329 17:17:40.287091 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0329 17:20:13.195497 322664 logs.go:138] Found kubelet problem: Mar 29 17:17:46 old-k8s-version-551944 kubelet[1457]: E0329 17:17:46.831876 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0329 17:20:13.195745 322664 logs.go:138] Found kubelet problem: Mar 29 17:17:53 old-k8s-version-551944 kubelet[1457]: E0329 17:17:53.260430 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.195979 322664 logs.go:138] Found kubelet problem: Mar 29 17:17:59 old-k8s-version-551944 kubelet[1457]: E0329 17:17:59.260070 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.196204 322664 logs.go:138] Found kubelet problem: Mar 29 17:18:04 old-k8s-version-551944 kubelet[1457]: E0329 17:18:04.260476 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.196480 322664 logs.go:138] Found kubelet problem: Mar 29 17:18:13 old-k8s-version-551944 kubelet[1457]: E0329 17:18:13.266456 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.196705 322664 logs.go:138] Found kubelet problem: Mar 29 17:18:15 old-k8s-version-551944 kubelet[1457]: E0329 17:18:15.260277 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.196935 322664 logs.go:138] Found kubelet problem: Mar 29 17:18:25 old-k8s-version-551944 kubelet[1457]: E0329 17:18:25.267475 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.197152 322664 logs.go:138] Found kubelet problem: Mar 29 17:18:27 old-k8s-version-551944 kubelet[1457]: E0329 17:18:27.260134 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.197382 322664 logs.go:138] Found kubelet problem: Mar 29 17:18:37 old-k8s-version-551944 kubelet[1457]: E0329 17:18:37.260145 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.197600 322664 logs.go:138] Found kubelet problem: Mar 29 17:18:42 old-k8s-version-551944 kubelet[1457]: E0329 17:18:42.261218 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.197831 322664 logs.go:138] Found kubelet problem: Mar 29 17:18:50 old-k8s-version-551944 kubelet[1457]: E0329 17:18:50.260639 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.198049 322664 logs.go:138] Found kubelet problem: Mar 29 17:18:57 old-k8s-version-551944 kubelet[1457]: E0329 17:18:57.260263 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.198281 322664 logs.go:138] Found kubelet problem: Mar 29 17:19:05 old-k8s-version-551944 kubelet[1457]: E0329 17:19:05.260268 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.198505 322664 logs.go:138] Found kubelet problem: Mar 29 17:19:09 old-k8s-version-551944 kubelet[1457]: E0329 17:19:09.260200 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.198776 322664 logs.go:138] Found kubelet problem: Mar 29 17:19:20 old-k8s-version-551944 kubelet[1457]: E0329 17:19:20.268357 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.198992 322664 logs.go:138] Found kubelet problem: Mar 29 17:19:22 old-k8s-version-551944 kubelet[1457]: E0329 17:19:22.264762 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.199223 322664 logs.go:138] Found kubelet problem: Mar 29 17:19:33 old-k8s-version-551944 kubelet[1457]: E0329 17:19:33.260820 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.199440 322664 logs.go:138] Found kubelet problem: Mar 29 17:19:37 old-k8s-version-551944 kubelet[1457]: E0329 17:19:37.260349 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.199671 322664 logs.go:138] Found kubelet problem: Mar 29 17:19:45 old-k8s-version-551944 kubelet[1457]: E0329 17:19:45.261516 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.199888 322664 logs.go:138] Found kubelet problem: Mar 29 17:19:50 old-k8s-version-551944 kubelet[1457]: E0329 17:19:50.260407 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.200150 322664 logs.go:138] Found kubelet problem: Mar 29 17:19:59 old-k8s-version-551944 kubelet[1457]: E0329 17:19:59.260075 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.200375 322664 logs.go:138] Found kubelet problem: Mar 29 17:20:03 old-k8s-version-551944 kubelet[1457]: E0329 17:20:03.260216 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
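(The kubelet problem scan above amounts to filtering the unit's journal for klog error lines; a rough hand-run equivalent, a sketch rather than minikube's exact matcher:

    # Pull the last 400 kubelet journal entries and keep klog error-level lines;
    # klog errors start with E followed by the MMDD date, e.g. "E0329".
    sudo journalctl -u kubelet -n 400 --no-pager | grep -E ' E[0-9]{4} '
)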
I0329 17:20:13.200408 322664 logs.go:123] Gathering logs for dmesg ...
I0329 17:20:13.200441 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0329 17:20:13.219930 322664 logs.go:123] Gathering logs for etcd [f7b7044d3d79] ...
I0329 17:20:13.219960 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7b7044d3d79"
I0329 17:20:13.271869 322664 logs.go:123] Gathering logs for kube-proxy [0c2204880506] ...
I0329 17:20:13.271949 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c2204880506"
I0329 17:20:13.301465 322664 logs.go:123] Gathering logs for kube-proxy [e74725ac4203] ...
I0329 17:20:13.301496 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e74725ac4203"
I0329 17:20:13.332933 322664 logs.go:123] Gathering logs for storage-provisioner [766ed8e00c6e] ...
I0329 17:20:13.332958 322664 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 766ed8e00c6e"
I0329 17:20:13.364131 322664 logs.go:123] Gathering logs for Docker ...
I0329 17:20:13.364161 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0329 17:20:13.396246 322664 logs.go:123] Gathering logs for container status ...
I0329 17:20:13.396284 322664 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0329 17:20:13.469431 322664 out.go:358] Setting ErrFile to fd 2...
I0329 17:20:13.469458 322664 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0329 17:20:13.469504 322664 out.go:270] X Problems detected in kubelet:
W0329 17:20:13.469517 322664 out.go:270] Mar 29 17:19:37 old-k8s-version-551944 kubelet[1457]: E0329 17:19:37.260349 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.469528 322664 out.go:270] Mar 29 17:19:45 old-k8s-version-551944 kubelet[1457]: E0329 17:19:45.261516 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.469536 322664 out.go:270] Mar 29 17:19:50 old-k8s-version-551944 kubelet[1457]: E0329 17:19:50.260407 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.469548 322664 out.go:270] Mar 29 17:19:59 old-k8s-version-551944 kubelet[1457]: E0329 17:19:59.260075 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.469554 322664 out.go:270] Mar 29 17:20:03 old-k8s-version-551944 kubelet[1457]: E0329 17:20:03.260216 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0329 17:20:13.469558 322664 out.go:358] Setting ErrFile to fd 2...
I0329 17:20:13.469572 322664 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:20:23.471578 322664 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0329 17:20:23.481722 322664 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
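(The healthz probe above can be rerun by hand from the host; a sketch, using -k because the cluster CA is not in the host trust store, and using the published host port 33082 shown in the docker inspect output below:

    # Direct to the container IP on the profile's docker network...
    curl -k https://192.168.76.2:8443/healthz
    # ...or via the forwarded host port. Both should print: ok
    curl -k https://127.0.0.1:33082/healthz
)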
I0329 17:20:23.485039 322664 out.go:201]
W0329 17:20:23.487931 322664 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0329 17:20:23.487968 322664 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0329 17:20:23.487991 322664 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0329 17:20:23.487996 322664 out.go:270] *
W0329 17:20:23.488867 322664 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
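(Spelled out, the suggested recovery is two commands; a sketch that repeats only the start flags that matter for this profile — note --purge also deletes the cached ~/.minikube state for every profile, and the original invocation's remaining flags are omitted here for brevity:

    # Tear down all minikube profiles plus cached state, then retry the same start.
    out/minikube-linux-arm64 delete --all --purge
    out/minikube-linux-arm64 start -p old-k8s-version-551944 --memory=2200 \
      --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0
)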
I0329 17:20:23.492835 322664 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-551944 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-551944
helpers_test.go:235: (dbg) docker inspect old-k8s-version-551944:
-- stdout --
[
{
"Id": "9e95e5888e5c98629b43d09b6dd4cb61e25e870022d5fa579fdd775aa8228938",
"Created": "2025-03-29T17:11:27.658918478Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 323138,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-03-29T17:14:09.159662047Z",
"FinishedAt": "2025-03-29T17:14:07.724252564Z"
},
"Image": "sha256:df0c2544fb3106b890f0a9ab81fcf49f97edb092b83e47f42288ad5dfe1f4b40",
"ResolvConfPath": "/var/lib/docker/containers/9e95e5888e5c98629b43d09b6dd4cb61e25e870022d5fa579fdd775aa8228938/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/9e95e5888e5c98629b43d09b6dd4cb61e25e870022d5fa579fdd775aa8228938/hostname",
"HostsPath": "/var/lib/docker/containers/9e95e5888e5c98629b43d09b6dd4cb61e25e870022d5fa579fdd775aa8228938/hosts",
"LogPath": "/var/lib/docker/containers/9e95e5888e5c98629b43d09b6dd4cb61e25e870022d5fa579fdd775aa8228938/9e95e5888e5c98629b43d09b6dd4cb61e25e870022d5fa579fdd775aa8228938-json.log",
"Name": "/old-k8s-version-551944",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-551944:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-551944",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "9e95e5888e5c98629b43d09b6dd4cb61e25e870022d5fa579fdd775aa8228938",
"LowerDir": "/var/lib/docker/overlay2/492e49fc0c591cbf2fe695c07674e75508d26498e0ac56bc59242e42eafdbe07-init/diff:/var/lib/docker/overlay2/d56b4577a51321e181abcd5d2c4d7cd31f04f1f861d51aed9bd7a96aff8949cd/diff",
"MergedDir": "/var/lib/docker/overlay2/492e49fc0c591cbf2fe695c07674e75508d26498e0ac56bc59242e42eafdbe07/merged",
"UpperDir": "/var/lib/docker/overlay2/492e49fc0c591cbf2fe695c07674e75508d26498e0ac56bc59242e42eafdbe07/diff",
"WorkDir": "/var/lib/docker/overlay2/492e49fc0c591cbf2fe695c07674e75508d26498e0ac56bc59242e42eafdbe07/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-551944",
"Source": "/var/lib/docker/volumes/old-k8s-version-551944/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-551944",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-551944",
"name.minikube.sigs.k8s.io": "old-k8s-version-551944",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "8420e8d8ebe9a6363af4363a755687024970bb9137a98cb005b982f61ac627bd",
"SandboxKey": "/var/run/docker/netns/8420e8d8ebe9",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33079"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33080"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33083"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33081"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33082"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-551944": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "4a:ac:8c:f7:ee:42",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "f412cd775402550126be4e4666ddb91fa146f2832b7d5a7e3fc21148c83fe99e",
"EndpointID": "5f2a44b7655a3e04b2f6baa31f75b72197efff4e0b43bac14269a44f9431902e",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-551944",
"9e95e5888e5c"
]
}
}
}
}
]
-- /stdout --
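(Individual fields can be pulled out of the inspect JSON above with a Go template rather than reading it whole; a sketch against this container, which should print 33082 and 192.168.76.2 per the output above:

    # Published host port for the API server's 8443/tcp, and the container's IP
    # on the profile network.
    docker inspect old-k8s-version-551944 \
      --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'
    docker inspect old-k8s-version-551944 \
      --format '{{ (index .NetworkSettings.Networks "old-k8s-version-551944").IPAddress }}'
)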
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-551944 -n old-k8s-version-551944
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-551944 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-551944 logs -n 25: (1.771153332s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| ssh | docker-flags-489161 ssh | docker-flags-489161 | jenkins | v1.35.0 | 29 Mar 25 17:10 UTC | 29 Mar 25 17:10 UTC |
| | sudo systemctl show docker | | | | | |
| | --property=Environment | | | | | |
| | --no-pager | | | | | |
| ssh | docker-flags-489161 ssh | docker-flags-489161 | jenkins | v1.35.0 | 29 Mar 25 17:10 UTC | 29 Mar 25 17:10 UTC |
| | sudo systemctl show docker | | | | | |
| | --property=ExecStart | | | | | |
| | --no-pager | | | | | |
| delete | -p docker-flags-489161 | docker-flags-489161 | jenkins | v1.35.0 | 29 Mar 25 17:10 UTC | 29 Mar 25 17:10 UTC |
| start | -p cert-options-443754 | cert-options-443754 | jenkins | v1.35.0 | 29 Mar 25 17:10 UTC | 29 Mar 25 17:11 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | cert-options-443754 ssh | cert-options-443754 | jenkins | v1.35.0 | 29 Mar 25 17:11 UTC | 29 Mar 25 17:11 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-443754 -- sudo | cert-options-443754 | jenkins | v1.35.0 | 29 Mar 25 17:11 UTC | 29 Mar 25 17:11 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-443754 | cert-options-443754 | jenkins | v1.35.0 | 29 Mar 25 17:11 UTC | 29 Mar 25 17:11 UTC |
| start | -p old-k8s-version-551944 | old-k8s-version-551944 | jenkins | v1.35.0 | 29 Mar 25 17:11 UTC | 29 Mar 25 17:13 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-638762 | cert-expiration-638762 | jenkins | v1.35.0 | 29 Mar 25 17:13 UTC | 29 Mar 25 17:14 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| addons | enable metrics-server -p old-k8s-version-551944 | old-k8s-version-551944 | jenkins | v1.35.0 | 29 Mar 25 17:13 UTC | 29 Mar 25 17:13 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-551944 | old-k8s-version-551944 | jenkins | v1.35.0 | 29 Mar 25 17:13 UTC | 29 Mar 25 17:14 UTC |
| | --alsologtostderr -v=3 | | | | | |
| delete | -p cert-expiration-638762 | cert-expiration-638762 | jenkins | v1.35.0 | 29 Mar 25 17:14 UTC | 29 Mar 25 17:14 UTC |
| start | -p | default-k8s-diff-port-455478 | jenkins | v1.35.0 | 29 Mar 25 17:14 UTC | 29 Mar 25 17:14 UTC |
| | default-k8s-diff-port-455478 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable dashboard -p old-k8s-version-551944 | old-k8s-version-551944 | jenkins | v1.35.0 | 29 Mar 25 17:14 UTC | 29 Mar 25 17:14 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-551944 | old-k8s-version-551944 | jenkins | v1.35.0 | 29 Mar 25 17:14 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p default-k8s-diff-port-455478 | default-k8s-diff-port-455478 | jenkins | v1.35.0 | 29 Mar 25 17:15 UTC | 29 Mar 25 17:15 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p | default-k8s-diff-port-455478 | jenkins | v1.35.0 | 29 Mar 25 17:15 UTC | 29 Mar 25 17:15 UTC |
| | default-k8s-diff-port-455478 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p default-k8s-diff-port-455478 | default-k8s-diff-port-455478 | jenkins | v1.35.0 | 29 Mar 25 17:15 UTC | 29 Mar 25 17:15 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p | default-k8s-diff-port-455478 | jenkins | v1.35.0 | 29 Mar 25 17:15 UTC | 29 Mar 25 17:19 UTC |
| | default-k8s-diff-port-455478 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| image | default-k8s-diff-port-455478 | default-k8s-diff-port-455478 | jenkins | v1.35.0 | 29 Mar 25 17:20 UTC | 29 Mar 25 17:20 UTC |
| | image list --format=json | | | | | |
| pause | -p | default-k8s-diff-port-455478 | jenkins | v1.35.0 | 29 Mar 25 17:20 UTC | 29 Mar 25 17:20 UTC |
| | default-k8s-diff-port-455478 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-diff-port-455478 | jenkins | v1.35.0 | 29 Mar 25 17:20 UTC | 29 Mar 25 17:20 UTC |
| | default-k8s-diff-port-455478 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-diff-port-455478 | jenkins | v1.35.0 | 29 Mar 25 17:20 UTC | 29 Mar 25 17:20 UTC |
| | default-k8s-diff-port-455478 | | | | | |
| delete | -p | default-k8s-diff-port-455478 | jenkins | v1.35.0 | 29 Mar 25 17:20 UTC | 29 Mar 25 17:20 UTC |
| | default-k8s-diff-port-455478 | | | | | |
| start | -p embed-certs-805256 | embed-certs-805256 | jenkins | v1.35.0 | 29 Mar 25 17:20 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/03/29 17:20:13
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.24.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0329 17:20:13.663219 340158 out.go:345] Setting OutFile to fd 1 ...
I0329 17:20:13.663373 340158 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:20:13.663394 340158 out.go:358] Setting ErrFile to fd 2...
I0329 17:20:13.663413 340158 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:20:13.663774 340158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20470-2372/.minikube/bin
I0329 17:20:13.664267 340158 out.go:352] Setting JSON to false
I0329 17:20:13.666663 340158 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7365,"bootTime":1743261449,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1080-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0329 17:20:13.666774 340158 start.go:139] virtualization:
I0329 17:20:13.670179 340158 out.go:177] * [embed-certs-805256] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0329 17:20:13.673319 340158 out.go:177] - MINIKUBE_LOCATION=20470
I0329 17:20:13.673397 340158 notify.go:220] Checking for updates...
I0329 17:20:13.679375 340158 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0329 17:20:13.682396 340158 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20470-2372/kubeconfig
I0329 17:20:13.685485 340158 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20470-2372/.minikube
I0329 17:20:13.688502 340158 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0329 17:20:13.691488 340158 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0329 17:20:13.695183 340158 config.go:182] Loaded profile config "old-k8s-version-551944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0329 17:20:13.695318 340158 driver.go:394] Setting default libvirt URI to qemu:///system
I0329 17:20:13.727787 340158 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0329 17:20:13.727977 340158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0329 17:20:13.791021 340158 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-03-29 17:20:13.780825549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1080-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:753481ec61c7c8955a23d6ff7bc8e4daed455734 Expected:753481ec61c7c8955a23d6ff7bc8e4daed455734} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0329 17:20:13.791153 340158 docker.go:318] overlay module found
I0329 17:20:13.794408 340158 out.go:177] * Using the docker driver based on user configuration
I0329 17:20:13.797273 340158 start.go:297] selected driver: docker
I0329 17:20:13.797297 340158 start.go:901] validating driver "docker" against <nil>
I0329 17:20:13.797312 340158 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0329 17:20:13.798073 340158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0329 17:20:13.869035 340158 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-03-29 17:20:13.859978781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1080-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:753481ec61c7c8955a23d6ff7bc8e4daed455734 Expected:753481ec61c7c8955a23d6ff7bc8e4daed455734} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0329 17:20:13.869184 340158 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0329 17:20:13.869445 340158 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0329 17:20:13.872435 340158 out.go:177] * Using Docker driver with root privileges
I0329 17:20:13.875391 340158 cni.go:84] Creating CNI manager for ""
I0329 17:20:13.875471 340158 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0329 17:20:13.875487 340158 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0329 17:20:13.875567 340158 start.go:340] cluster config:
{Name:embed-certs-805256 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-805256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0329 17:20:13.878657 340158 out.go:177] * Starting "embed-certs-805256" primary control-plane node in "embed-certs-805256" cluster
I0329 17:20:13.881523 340158 cache.go:121] Beginning downloading kic base image for docker with docker
I0329 17:20:13.884620 340158 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
I0329 17:20:13.887353 340158 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0329 17:20:13.887408 340158 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20470-2372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
I0329 17:20:13.887421 340158 cache.go:56] Caching tarball of preloaded images
I0329 17:20:13.887456 340158 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
I0329 17:20:13.887512 340158 preload.go:172] Found /home/jenkins/minikube-integration/20470-2372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0329 17:20:13.887522 340158 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
I0329 17:20:13.887630 340158 profile.go:143] Saving config to /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/embed-certs-805256/config.json ...
I0329 17:20:13.887650 340158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/embed-certs-805256/config.json: {Name:mk9b49ccb39bc1a6acd402c447eb859e0715751a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0329 17:20:13.907873 340158 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
I0329 17:20:13.907896 340158 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
I0329 17:20:13.907916 340158 cache.go:230] Successfully downloaded all kic artifacts
I0329 17:20:13.907946 340158 start.go:360] acquireMachinesLock for embed-certs-805256: {Name:mk13912f05aa218452628e0a7e618c03649625e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0329 17:20:13.908051 340158 start.go:364] duration metric: took 84.981µs to acquireMachinesLock for "embed-certs-805256"
I0329 17:20:13.908084 340158 start.go:93] Provisioning new machine with config: &{Name:embed-certs-805256 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-805256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0329 17:20:13.908153 340158 start.go:125] createHost starting for "" (driver="docker")
I0329 17:20:13.469431 322664 out.go:358] Setting ErrFile to fd 2...
I0329 17:20:13.469458 322664 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0329 17:20:13.469504 322664 out.go:270] X Problems detected in kubelet:
W0329 17:20:13.469517 322664 out.go:270] Mar 29 17:19:37 old-k8s-version-551944 kubelet[1457]: E0329 17:19:37.260349 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.469528 322664 out.go:270] Mar 29 17:19:45 old-k8s-version-551944 kubelet[1457]: E0329 17:19:45.261516 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.469536 322664 out.go:270] Mar 29 17:19:50 old-k8s-version-551944 kubelet[1457]: E0329 17:19:50.260407 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.469548 322664 out.go:270] Mar 29 17:19:59 old-k8s-version-551944 kubelet[1457]: E0329 17:19:59.260075 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0329 17:20:13.469554 322664 out.go:270] Mar 29 17:20:03 old-k8s-version-551944 kubelet[1457]: E0329 17:20:03.260216 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0329 17:20:13.469558 322664 out.go:358] Setting ErrFile to fd 2...
I0329 17:20:13.469572 322664 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0329 17:20:13.911575 340158 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0329 17:20:13.911811 340158 start.go:159] libmachine.API.Create for "embed-certs-805256" (driver="docker")
I0329 17:20:13.911850 340158 client.go:168] LocalClient.Create starting
I0329 17:20:13.911939 340158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca.pem
I0329 17:20:13.911981 340158 main.go:141] libmachine: Decoding PEM data...
I0329 17:20:13.912003 340158 main.go:141] libmachine: Parsing certificate...
I0329 17:20:13.912064 340158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20470-2372/.minikube/certs/cert.pem
I0329 17:20:13.912090 340158 main.go:141] libmachine: Decoding PEM data...
I0329 17:20:13.912101 340158 main.go:141] libmachine: Parsing certificate...
I0329 17:20:13.912443 340158 cli_runner.go:164] Run: docker network inspect embed-certs-805256 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0329 17:20:13.928619 340158 cli_runner.go:211] docker network inspect embed-certs-805256 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0329 17:20:13.928703 340158 network_create.go:284] running [docker network inspect embed-certs-805256] to gather additional debugging logs...
I0329 17:20:13.928724 340158 cli_runner.go:164] Run: docker network inspect embed-certs-805256
W0329 17:20:13.945387 340158 cli_runner.go:211] docker network inspect embed-certs-805256 returned with exit code 1
I0329 17:20:13.945425 340158 network_create.go:287] error running [docker network inspect embed-certs-805256]: docker network inspect embed-certs-805256: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-805256 not found
I0329 17:20:13.945439 340158 network_create.go:289] output of [docker network inspect embed-certs-805256]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-805256 not found
** /stderr **
I0329 17:20:13.945549 340158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0329 17:20:13.961175 340158 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed6c7f633f68 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:31:a9:4f:84:84} reservation:<nil>}
I0329 17:20:13.961543 340158 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ab94aad101d5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:ae:46:b1:30:af} reservation:<nil>}
I0329 17:20:13.961890 340158 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b5fcac9f4aaf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:53:f0:07:85:3f} reservation:<nil>}
I0329 17:20:13.962219 340158 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f412cd775402 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:93:fb:99:2a:16} reservation:<nil>}
I0329 17:20:13.962988 340158 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019eed80}
I0329 17:20:13.963028 340158 network_create.go:124] attempt to create docker network embed-certs-805256 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0329 17:20:13.963109 340158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-805256 embed-certs-805256
I0329 17:20:14.028288 340158 network_create.go:108] docker network embed-certs-805256 192.168.85.0/24 created
I0329 17:20:14.028322 340158 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-805256" container
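The subnet scan above walks the third octet in steps of 9 (49, 58, 67, 76) until it finds a /24 no existing Docker network occupies, then assigns the node the .2 address. A rough standalone sketch of that scan, assuming only a local Docker CLI (the real network.go additionally tracks reservations and non-Docker host interfaces):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets collects the IPAM subnet of every Docker network on the host.
func takenSubnets() string {
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		panic(err)
	}
	var subnets []string
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect", "--format",
			"{{range .IPAM.Config}}{{.Subnet}} {{end}}", id).Output()
		if err != nil {
			continue
		}
		subnets = append(subnets, strings.TrimSpace(string(out)))
	}
	return strings.Join(subnets, " ")
}

func main() {
	taken := takenSubnets()
	// Mirror the 192.168.49.0/24 -> .58 -> .67 -> .76 -> .85 progression logged above.
	for third := 49; third < 256; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !strings.Contains(taken, subnet) {
			fmt.Println("using free private subnet", subnet) // 192.168.85.0/24 in this run
			return
		}
	}
}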
I0329 17:20:14.028399 340158 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0329 17:20:14.045395 340158 cli_runner.go:164] Run: docker volume create embed-certs-805256 --label name.minikube.sigs.k8s.io=embed-certs-805256 --label created_by.minikube.sigs.k8s.io=true
I0329 17:20:14.064629 340158 oci.go:103] Successfully created a docker volume embed-certs-805256
I0329 17:20:14.064713 340158 cli_runner.go:164] Run: docker run --rm --name embed-certs-805256-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-805256 --entrypoint /usr/bin/test -v embed-certs-805256:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
I0329 17:20:14.611621 340158 oci.go:107] Successfully prepared a docker volume embed-certs-805256
I0329 17:20:14.611672 340158 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0329 17:20:14.611691 340158 kic.go:194] Starting extracting preloaded images to volume ...
I0329 17:20:14.611764 340158 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20470-2372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-805256:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
I0329 17:20:18.480506 340158 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20470-2372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-805256:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (3.868701955s)
I0329 17:20:18.480534 340158 kic.go:203] duration metric: took 3.868839951s to extract preloaded images to volume ...
W0329 17:20:18.480703 340158 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0329 17:20:18.480821 340158 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0329 17:20:18.537264 340158 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-805256 --name embed-certs-805256 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-805256 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-805256 --network embed-certs-805256 --ip 192.168.85.2 --volume embed-certs-805256:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
I0329 17:20:23.471578 322664 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0329 17:20:23.481722 322664 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0329 17:20:23.485039 322664 out.go:201]
W0329 17:20:23.487931 322664 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0329 17:20:23.487968 322664 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0329 17:20:23.487991 322664 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0329 17:20:23.487996 322664 out.go:270] *
W0329 17:20:23.488867 322664 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0329 17:20:23.492835 322664 out.go:201]
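Process 322664 (the old-k8s-version restart) gives up just above: the apiserver answers /healthz with 200, yet the control plane never reports v1.20.0 within the 6m wait, hence exit status 102 with K8S_UNHEALTHY_CONTROL_PLANE. A minimal sketch of that healthz probe, hypothetical rather than minikube's actual code, using the node IP from this log (InsecureSkipVerify stands in for trusting minikube's generated CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by minikube's own CA; a quick probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // "200 OK" in the log above, yet the node never updated to v1.20.0
}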
I0329 17:20:18.840125 340158 cli_runner.go:164] Run: docker container inspect embed-certs-805256 --format={{.State.Running}}
I0329 17:20:18.861673 340158 cli_runner.go:164] Run: docker container inspect embed-certs-805256 --format={{.State.Status}}
I0329 17:20:18.884580 340158 cli_runner.go:164] Run: docker exec embed-certs-805256 stat /var/lib/dpkg/alternatives/iptables
I0329 17:20:18.940897 340158 oci.go:144] the created container "embed-certs-805256" has a running status.
I0329 17:20:18.940923 340158 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20470-2372/.minikube/machines/embed-certs-805256/id_rsa...
I0329 17:20:20.689919 340158 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20470-2372/.minikube/machines/embed-certs-805256/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0329 17:20:20.709337 340158 cli_runner.go:164] Run: docker container inspect embed-certs-805256 --format={{.State.Status}}
I0329 17:20:20.727489 340158 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0329 17:20:20.727514 340158 kic_runner.go:114] Args: [docker exec --privileged embed-certs-805256 chown docker:docker /home/docker/.ssh/authorized_keys]
I0329 17:20:20.768280 340158 cli_runner.go:164] Run: docker container inspect embed-certs-805256 --format={{.State.Status}}
I0329 17:20:20.787910 340158 machine.go:93] provisionDockerMachine start ...
I0329 17:20:20.787995 340158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805256
I0329 17:20:20.805296 340158 main.go:141] libmachine: Using SSH client type: native
I0329 17:20:20.805738 340158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33089 <nil> <nil>}
I0329 17:20:20.805751 340158 main.go:141] libmachine: About to run SSH command:
hostname
I0329 17:20:20.925716 340158 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-805256
I0329 17:20:20.925743 340158 ubuntu.go:169] provisioning hostname "embed-certs-805256"
I0329 17:20:20.925803 340158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805256
I0329 17:20:20.945436 340158 main.go:141] libmachine: Using SSH client type: native
I0329 17:20:20.945758 340158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33089 <nil> <nil>}
I0329 17:20:20.945775 340158 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-805256 && echo "embed-certs-805256" | sudo tee /etc/hostname
I0329 17:20:21.088013 340158 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-805256
I0329 17:20:21.088109 340158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805256
I0329 17:20:21.106795 340158 main.go:141] libmachine: Using SSH client type: native
I0329 17:20:21.107116 340158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33089 <nil> <nil>}
I0329 17:20:21.107141 340158 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-805256' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-805256/g' /etc/hosts;
	else
		echo '127.0.1.1 embed-certs-805256' | sudo tee -a /etc/hosts;
	fi
fi
I0329 17:20:21.234462 340158 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0329 17:20:21.234488 340158 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20470-2372/.minikube CaCertPath:/home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20470-2372/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20470-2372/.minikube}
I0329 17:20:21.234512 340158 ubuntu.go:177] setting up certificates
I0329 17:20:21.234549 340158 provision.go:84] configureAuth start
I0329 17:20:21.234611 340158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-805256
I0329 17:20:21.252638 340158 provision.go:143] copyHostCerts
I0329 17:20:21.252712 340158 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2372/.minikube/ca.pem, removing ...
I0329 17:20:21.252726 340158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2372/.minikube/ca.pem
I0329 17:20:21.252801 340158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20470-2372/.minikube/ca.pem (1082 bytes)
I0329 17:20:21.252895 340158 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2372/.minikube/cert.pem, removing ...
I0329 17:20:21.252905 340158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2372/.minikube/cert.pem
I0329 17:20:21.252934 340158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2372/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20470-2372/.minikube/cert.pem (1123 bytes)
I0329 17:20:21.252997 340158 exec_runner.go:144] found /home/jenkins/minikube-integration/20470-2372/.minikube/key.pem, removing ...
I0329 17:20:21.253008 340158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20470-2372/.minikube/key.pem
I0329 17:20:21.253036 340158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20470-2372/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20470-2372/.minikube/key.pem (1679 bytes)
I0329 17:20:21.253095 340158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20470-2372/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca-key.pem org=jenkins.embed-certs-805256 san=[127.0.0.1 192.168.85.2 embed-certs-805256 localhost minikube]
I0329 17:20:21.471301 340158 provision.go:177] copyRemoteCerts
I0329 17:20:21.471376 340158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0329 17:20:21.471416 340158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805256
I0329 17:20:21.489201 340158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/embed-certs-805256/id_rsa Username:docker}
I0329 17:20:21.579312 340158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0329 17:20:21.606025 340158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0329 17:20:21.637008 340158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0329 17:20:21.663183 340158 provision.go:87] duration metric: took 428.613624ms to configureAuth
I0329 17:20:21.663256 340158 ubuntu.go:193] setting minikube options for container-runtime
I0329 17:20:21.663457 340158 config.go:182] Loaded profile config "embed-certs-805256": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0329 17:20:21.663522 340158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805256
I0329 17:20:21.680562 340158 main.go:141] libmachine: Using SSH client type: native
I0329 17:20:21.680933 340158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33089 <nil> <nil>}
I0329 17:20:21.680951 340158 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0329 17:20:21.819172 340158 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0329 17:20:21.819196 340158 ubuntu.go:71] root file system type: overlay
I0329 17:20:21.819299 340158 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0329 17:20:21.819382 340158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805256
I0329 17:20:21.837124 340158 main.go:141] libmachine: Using SSH client type: native
I0329 17:20:21.837435 340158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33089 <nil> <nil>}
I0329 17:20:21.837512 340158 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0329 17:20:21.974981 340158 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0329 17:20:21.975066 340158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805256
I0329 17:20:21.993523 340158 main.go:141] libmachine: Using SSH client type: native
I0329 17:20:21.993828 340158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33089 <nil> <nil>}
I0329 17:20:21.993865 340158 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0329 17:20:22.862197 340158 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-02-26 10:39:24.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-03-29 17:20:21.966597399 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0329 17:20:22.862229 340158 machine.go:96] duration metric: took 2.074299636s to provisionDockerMachine
I0329 17:20:22.862239 340158 client.go:171] duration metric: took 8.950377798s to LocalClient.Create
I0329 17:20:22.862251 340158 start.go:167] duration metric: took 8.950442044s to libmachine.API.Create "embed-certs-805256"
I0329 17:20:22.862258 340158 start.go:293] postStartSetup for "embed-certs-805256" (driver="docker")
I0329 17:20:22.862268 340158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0329 17:20:22.862329 340158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0329 17:20:22.862380 340158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805256
I0329 17:20:22.879200 340158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/embed-certs-805256/id_rsa Username:docker}
I0329 17:20:22.977752 340158 ssh_runner.go:195] Run: cat /etc/os-release
I0329 17:20:22.980949 340158 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0329 17:20:22.980985 340158 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0329 17:20:22.980997 340158 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0329 17:20:22.981005 340158 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0329 17:20:22.981015 340158 filesync.go:126] Scanning /home/jenkins/minikube-integration/20470-2372/.minikube/addons for local assets ...
I0329 17:20:22.981075 340158 filesync.go:126] Scanning /home/jenkins/minikube-integration/20470-2372/.minikube/files for local assets ...
I0329 17:20:22.981177 340158 filesync.go:149] local asset: /home/jenkins/minikube-integration/20470-2372/.minikube/files/etc/ssl/certs/77082.pem -> 77082.pem in /etc/ssl/certs
I0329 17:20:22.981284 340158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0329 17:20:22.990236 340158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20470-2372/.minikube/files/etc/ssl/certs/77082.pem --> /etc/ssl/certs/77082.pem (1708 bytes)
I0329 17:20:23.016170 340158 start.go:296] duration metric: took 153.898326ms for postStartSetup
I0329 17:20:23.016571 340158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-805256
I0329 17:20:23.034416 340158 profile.go:143] Saving config to /home/jenkins/minikube-integration/20470-2372/.minikube/profiles/embed-certs-805256/config.json ...
I0329 17:20:23.034772 340158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0329 17:20:23.034825 340158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805256
I0329 17:20:23.051997 340158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/embed-certs-805256/id_rsa Username:docker}
I0329 17:20:23.147077 340158 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0329 17:20:23.153362 340158 start.go:128] duration metric: took 9.245195591s to createHost
I0329 17:20:23.153388 340158 start.go:83] releasing machines lock for "embed-certs-805256", held for 9.24532141s
I0329 17:20:23.153453 340158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-805256
I0329 17:20:23.170130 340158 ssh_runner.go:195] Run: cat /version.json
I0329 17:20:23.170186 340158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805256
I0329 17:20:23.170438 340158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0329 17:20:23.170504 340158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805256
I0329 17:20:23.188394 340158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/embed-certs-805256/id_rsa Username:docker}
I0329 17:20:23.191147 340158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33089 SSHKeyPath:/home/jenkins/minikube-integration/20470-2372/.minikube/machines/embed-certs-805256/id_rsa Username:docker}
I0329 17:20:23.277865 340158 ssh_runner.go:195] Run: systemctl --version
I0329 17:20:23.415373 340158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0329 17:20:23.419884 340158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0329 17:20:23.446792 340158 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0329 17:20:23.446893 340158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0329 17:20:23.480206 340158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0329 17:20:23.480272 340158 start.go:498] detecting cgroup driver to use...
I0329 17:20:23.480319 340158 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0329 17:20:23.480441 340158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0329 17:20:23.519564 340158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0329 17:20:23.545104 340158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0329 17:20:23.564426 340158 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0329 17:20:23.564493 340158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0329 17:20:23.575320 340158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0329 17:20:23.587253 340158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0329 17:20:23.596792 340158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0329 17:20:23.608289 340158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0329 17:20:23.618434 340158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0329 17:20:23.628708 340158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0329 17:20:23.662405 340158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
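The run of sed commands above is minikube rewriting /etc/containerd/config.toml in place to force the cgroupfs cgroup driver it detected on the host. A minimal Go sketch of the same patch-loop pattern, run locally rather than through ssh_runner (a hypothetical helper for illustration, not minikube's actual implementation; it requires root and an existing config.toml):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The sed expressions are copied from the ssh_runner lines above.
        patches := []string{
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
            `sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
            `sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
        }
        for _, p := range patches {
            // sh -c preserves the quoting exactly as the log shows it.
            if out, err := exec.Command("sh", "-c", p).CombinedOutput(); err != nil {
                fmt.Printf("patch %q failed: %v\n%s", p, err, out)
                return
            }
        }
        fmt.Println("containerd configured for the cgroupfs driver")
    }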
==> Docker <==
Mar 29 17:15:13 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:15:13.511687908Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:15:13 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:15:13.719338556Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:15:13 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:15:13.719442007Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:15:13 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:15:13.719473145Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Mar 29 17:15:22 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:15:22.320357636Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Mar 29 17:15:22 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:15:22.320398129Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Mar 29 17:15:22 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:15:22.324548338Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Mar 29 17:15:38 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:15:38.553721262Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:15:38 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:15:38.765725997Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:15:38 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:15:38.765908332Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:15:38 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:15:38.765937887Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Mar 29 17:16:12 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:16:12.307249755Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Mar 29 17:16:12 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:16:12.307706251Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Mar 29 17:16:12 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:16:12.312090299Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Mar 29 17:16:20 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:16:20.621341505Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:16:20 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:16:20.938120786Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:16:20 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:16:20.938237266Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:16:20 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:16:20.938275880Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Mar 29 17:17:40 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:17:40.282294384Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Mar 29 17:17:40 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:17:40.282340521Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Mar 29 17:17:40 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:17:40.286300548Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Mar 29 17:17:46 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:17:46.612148159Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:17:46 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:17:46.824733266Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:17:46 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:17:46.824911630Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Mar 29 17:17:46 old-k8s-version-551944 dockerd[1129]: time="2025-03-29T17:17:46.824947766Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
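The pull failures above are expected in this test: the metrics-server image is deliberately pointed at the unresolvable registry fake.domain (see the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line in the start output), so every pull attempt dies at DNS resolution. The same failure mode can be reproduced with a plain ping of the registry's /v2/ endpoint, which is roughly the first thing dockerd tries:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        // Resolution of fake.domain fails, so the error is the same
        // "no such host" the daemon log keeps recording above.
        if _, err := client.Get("https://fake.domain/v2/"); err != nil {
            fmt.Println("registry ping failed:", err)
            return
        }
        fmt.Println("unexpectedly reachable")
    }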
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
b4f0beb7b2eb8 ba04bb24b9575 5 minutes ago Running storage-provisioner 2 80ffce7c6eca5 storage-provisioner
ab0ca8585388c kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 5 minutes ago Running kubernetes-dashboard 0 a4025aecaeed7 kubernetes-dashboard-cd95d586-52mcs
766ed8e00c6e0 ba04bb24b9575 5 minutes ago Exited storage-provisioner 1 80ffce7c6eca5 storage-provisioner
0c80cbb76391a db91994f4ee8f 5 minutes ago Running coredns 1 1022dda77d01d coredns-74ff55c5b-b9w86
0c22048805069 25a5233254979 5 minutes ago Running kube-proxy 1 e42f05e63dcc4 kube-proxy-pmp5k
0995033c5a0b4 1611cd07b61d5 5 minutes ago Running busybox 1 ae40543c0b815 busybox
f7b7044d3d794 05b738aa1bc63 6 minutes ago Running etcd 1 bcb399211f752 etcd-old-k8s-version-551944
945138d280daa 1df8a2b116bd1 6 minutes ago Running kube-controller-manager 1 6303b1a6e11dd kube-controller-manager-old-k8s-version-551944
bc8afe7816f3a 2c08bbbc02d3a 6 minutes ago Running kube-apiserver 1 56f9d24836a4d kube-apiserver-old-k8s-version-551944
99605fcde49b9 e7605f88f17d6 6 minutes ago Running kube-scheduler 1 e21ef5a00d398 kube-scheduler-old-k8s-version-551944
0898b1ec2ed6e gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 6 minutes ago Exited busybox 0 d7f7e420fe7a3 busybox
e74725ac4203a 25a5233254979 8 minutes ago Exited kube-proxy 0 df3fec14b163a kube-proxy-pmp5k
8e2a8ebfddb3f db91994f4ee8f 8 minutes ago Exited coredns 0 eca0fc08aa37f coredns-74ff55c5b-b9w86
f2fc2725f63d0 e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 2e3bc47090e54 kube-scheduler-old-k8s-version-551944
492d0056000dc 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 7db20cb31f039 kube-controller-manager-old-k8s-version-551944
9ed3257bd77f8 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 10b3339ac04b8 kube-apiserver-old-k8s-version-551944
6c74b83fdc73f 05b738aa1bc63 8 minutes ago Exited etcd 0 ce505553944cf etcd-old-k8s-version-551944
==> coredns [0c80cbb76391] <==
I0329 17:15:08.883114 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-03-29 17:14:38.881169429 +0000 UTC m=+0.066869731) (total time: 30.000594689s):
Trace[2019727887]: [30.000594689s] [30.000594689s] END
E0329 17:15:08.883151 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0329 17:15:08.888351 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-03-29 17:14:38.887973591 +0000 UTC m=+0.073673893) (total time: 30.00035467s):
Trace[939984059]: [30.00035467s] [30.00035467s] END
E0329 17:15:08.888371 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0329 17:15:08.888558 1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-03-29 17:14:38.8883273 +0000 UTC m=+0.074027603) (total time: 30.000212826s):
Trace[1474941318]: [30.000212826s] [30.000212826s] END
E0329 17:15:08.888574 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:34259 - 56021 "HINFO IN 6331740427461607098.9013381346051582084. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005069101s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
==> coredns [8e2a8ebfddb3] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
[INFO] Reloading complete
[INFO] 127.0.0.1:59735 - 24720 "HINFO IN 7250072121050797060.4912759278564538470. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01442848s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
I0329 17:12:52.755015 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-03-29 17:12:22.754352119 +0000 UTC m=+0.066741726) (total time: 30.0005508s):
Trace[2019727887]: [30.0005508s] [30.0005508s] END
E0329 17:12:52.755048 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0329 17:12:52.755303 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-03-29 17:12:22.75493335 +0000 UTC m=+0.067322966) (total time: 30.000352914s):
Trace[939984059]: [30.000352914s] [30.000352914s] END
E0329 17:12:52.755319 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0329 17:12:52.758123 1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-03-29 17:12:22.755161906 +0000 UTC m=+0.067551522) (total time: 30.002940603s):
Trace[1474941318]: [30.002940603s] [30.002940603s] END
E0329 17:12:52.758140 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
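The reflector timeouts in both CoreDNS logs are list calls against the in-cluster apiserver VIP (10.96.0.1:443) made right after a restart, before kube-proxy has re-synced the Service rules that make that VIP routable; once the caches sync, the errors stop. A direct dial of the VIP shows the same symptom (a sketch that assumes it runs inside the cluster network):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 10.96.0.1:443 is the "kubernetes" Service VIP from the errors
        // above; until kube-proxy programs iptables, dials just time out.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            fmt.Println("dial failed (expected until kube-proxy syncs):", err)
            return
        }
        defer conn.Close()
        fmt.Println("service VIP reachable")
    }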
==> describe nodes <==
Name: old-k8s-version-551944
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-551944
kubernetes.io/os=linux
minikube.k8s.io/commit=9e4fb25ec9c9ec7d3315da8ba61a31fdfa364d77
minikube.k8s.io/name=old-k8s-version-551944
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_03_29T17_12_06_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 29 Mar 2025 17:12:03 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-551944
AcquireTime: <unset>
RenewTime: Sat, 29 Mar 2025 17:20:18 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 29 Mar 2025 17:15:27 +0000 Sat, 29 Mar 2025 17:11:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 29 Mar 2025 17:15:27 +0000 Sat, 29 Mar 2025 17:11:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 29 Mar 2025 17:15:27 +0000 Sat, 29 Mar 2025 17:11:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 29 Mar 2025 17:15:27 +0000 Sat, 29 Mar 2025 17:12:20 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-551944
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
System Info:
Machine ID: 2c1999afede646adab5640a9d55d469a
System UUID: 410f045d-ea5d-4b69-8e17-5161ed92500a
Boot ID: d543d4e4-ed3b-41b3-b616-48919959d704
Kernel Version: 5.15.0-1080-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://28.0.1
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m44s
kube-system coredns-74ff55c5b-b9w86 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 8m4s
kube-system etcd-old-k8s-version-551944 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m15s
kube-system kube-apiserver-old-k8s-version-551944 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m15s
kube-system kube-controller-manager-old-k8s-version-551944 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m15s
kube-system kube-proxy-pmp5k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m4s
kube-system kube-scheduler-old-k8s-version-551944 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m15s
kube-system metrics-server-9975d5f86-wrjnc 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m30s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m2s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-r8mng 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m31s
kubernetes-dashboard kubernetes-dashboard-cd95d586-52mcs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m31s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 370Mi (4%) 170Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m30s (x5 over 8m30s) kubelet Node old-k8s-version-551944 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m30s (x5 over 8m30s) kubelet Node old-k8s-version-551944 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m30s (x4 over 8m30s) kubelet Node old-k8s-version-551944 status is now: NodeHasSufficientPID
Normal Starting 8m16s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m16s kubelet Node old-k8s-version-551944 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m16s kubelet Node old-k8s-version-551944 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m16s kubelet Node old-k8s-version-551944 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m15s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m5s kubelet Node old-k8s-version-551944 status is now: NodeReady
Normal Starting 8m2s kube-proxy Starting kube-proxy.
Normal Starting 6m3s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m3s (x8 over 6m3s) kubelet Node old-k8s-version-551944 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m3s (x8 over 6m3s) kubelet Node old-k8s-version-551944 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m3s (x7 over 6m3s) kubelet Node old-k8s-version-551944 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m3s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m46s kube-proxy Starting kube-proxy.
==> dmesg <==
[Mar29 15:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014962] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.508000] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.032967] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.774639] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.651387] kauditd_printk_skb: 36 callbacks suppressed
[Mar29 16:32] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
[Mar29 17:03] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
==> etcd [6c74b83fdc73] <==
raft2025/03/29 17:11:57 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2025/03/29 17:11:57 INFO: ea7e25599daad906 became leader at term 2
raft2025/03/29 17:11:57 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-03-29 17:11:57.220396 I | etcdserver: published {Name:old-k8s-version-551944 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-03-29 17:11:57.220652 I | embed: ready to serve client requests
2025-03-29 17:11:57.230849 I | etcdserver: setting up the initial cluster version to 3.4
2025-03-29 17:11:57.231415 I | embed: ready to serve client requests
2025-03-29 17:11:57.231603 N | etcdserver/membership: set the initial cluster version to 3.4
2025-03-29 17:11:57.231738 I | etcdserver/api: enabled capabilities for version 3.4
2025-03-29 17:11:57.231800 I | embed: serving client requests on 192.168.76.2:2379
2025-03-29 17:11:57.233275 I | embed: serving client requests on 127.0.0.1:2379
2025-03-29 17:12:18.959757 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:12:20.430277 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:12:30.430110 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:12:40.430284 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:12:50.430259 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:13:00.430385 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:13:10.430147 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:13:20.430323 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:13:30.430330 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:13:40.430310 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:13:50.430284 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:13:55.489997 N | pkg/osutil: received terminated signal, shutting down...
WARNING: 2025/03/29 17:13:55 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2025-03-29 17:13:55.526414 I | etcdserver: skipped leadership transfer for single voting member cluster
==> etcd [f7b7044d3d79] <==
2025-03-29 17:16:15.819818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:16:25.819789 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:16:35.823092 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:16:45.819711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:16:55.820107 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:17:05.819759 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:17:15.819783 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:17:25.819673 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:17:35.819904 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:17:45.819847 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:17:55.819899 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:18:05.819798 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:18:15.819773 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:18:25.819912 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:18:35.819727 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:18:45.819719 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:18:55.819730 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:19:05.819772 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:19:15.819786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:19:25.819905 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:19:35.819702 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:19:45.819886 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:19:55.819735 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:20:05.820014 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-03-29 17:20:15.819887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
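The wall of "/health OK" lines is etcd answering its HTTP health endpoint roughly every ten seconds for the duration of the run. A sketch of the same poll follows; note that this cluster serves etcd over TLS with client certificates (ClientURLs is https://192.168.76.2:2379), so the plain-HTTP URL below is an assumption made purely for illustration:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Assumed plain-HTTP endpoint; the cluster above would need TLS
        // client certs. etcd replies {"health":"true"} when it can serve.
        for i := 0; i < 3; i++ {
            resp, err := http.Get("http://127.0.0.1:2379/health")
            if err != nil {
                fmt.Println("health probe failed:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("etcd /health: %s\n", body)
            }
            time.Sleep(10 * time.Second)
        }
    }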
==> kernel <==
17:20:25 up 2:02, 0 users, load average: 2.37, 2.19, 2.72
Linux old-k8s-version-551944 5.15.0-1080-aws #87~20.04.1-Ubuntu SMP Tue Mar 4 10:57:22 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [9ed3257bd77f] <==
W0329 17:14:05.116248 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.128316 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.128317 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.144238 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.163922 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.165795 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.199434 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.205419 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.216248 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.218126 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.243278 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.261028 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.293734 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.320942 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.357575 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.389972 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.445133 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.445505 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.450600 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.452400 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.481027 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.501899 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.516649 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.523382 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 17:14:05.562083 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
==> kube-apiserver [bc8afe7816f3] <==
I0329 17:17:01.056318 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:17:01.056327 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0329 17:17:39.215346 1 handler_proxy.go:102] no RequestInfo found in the context
E0329 17:17:39.215435 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0329 17:17:39.215443 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0329 17:17:45.151463 1 client.go:360] parsed scheme: "passthrough"
I0329 17:17:45.151561 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:17:45.151574 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0329 17:18:18.381545 1 client.go:360] parsed scheme: "passthrough"
I0329 17:18:18.381598 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:18:18.381607 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0329 17:18:55.962053 1 client.go:360] parsed scheme: "passthrough"
I0329 17:18:55.962097 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:18:55.962107 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0329 17:19:27.298776 1 client.go:360] parsed scheme: "passthrough"
I0329 17:19:27.298823 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:19:27.298834 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0329 17:19:36.584313 1 handler_proxy.go:102] no RequestInfo found in the context
E0329 17:19:36.584387 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0329 17:19:36.584395 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0329 17:20:11.248521 1 client.go:360] parsed scheme: "passthrough"
I0329 17:20:11.248577 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0329 17:20:11.248766 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
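The recurring OpenAPI 503s for v1beta1.metrics.k8s.io happen because the aggregated APIService's backend, metrics-server, never becomes ready (its image pull from fake.domain cannot succeed), so the apiserver keeps requeueing the spec fetch. A client-go discovery probe surfaces the same condition (a hypothetical standalone check, assuming a reachable kubeconfig at the default path):

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            fmt.Println("kubeconfig:", err)
            return
        }
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            fmt.Println("discovery client:", err)
            return
        }
        // With metrics-server down, this returns the same "the server is
        // currently unable to handle the request" seen in the logs above.
        if _, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1"); err != nil {
            fmt.Println("metrics API unavailable:", err)
            return
        }
        fmt.Println("metrics API available")
    }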
==> kube-controller-manager [492d0056000d] <==
I0329 17:12:21.569070 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0329 17:12:21.586353 1 shared_informer.go:247] Caches are synced for crt configmap
I0329 17:12:21.587036 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0329 17:12:21.620277 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0329 17:12:21.656626 1 shared_informer.go:247] Caches are synced for resource quota
I0329 17:12:21.659528 1 shared_informer.go:247] Caches are synced for stateful set
I0329 17:12:21.673499 1 shared_informer.go:247] Caches are synced for daemon sets
I0329 17:12:21.687555 1 shared_informer.go:247] Caches are synced for resource quota
I0329 17:12:21.738358 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pmp5k"
E0329 17:12:21.805428 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"8ec8f939-52be-4e77-96dc-7441196330b4", ResourceVersion:"268", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63878865126, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40019bdb00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40019bdb20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x40019bdb40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001333000), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019bd
b60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019bdb80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40019bdbc0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000f92060), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000581ff8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000540460), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000741380)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40002d40a8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0329 17:12:21.811325 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0329 17:12:22.111493 1 shared_informer.go:247] Caches are synced for garbage collector
I0329 17:12:22.120055 1 shared_informer.go:247] Caches are synced for garbage collector
I0329 17:12:22.120109 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0329 17:12:23.183644 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0329 17:12:23.207478 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-bw47f"
I0329 17:13:54.088373 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
I0329 17:13:54.179586 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
E0329 17:13:54.203130 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
I0329 17:13:55.362997 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-wrjnc"
E0329 17:13:55.544622 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.76.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.76.2:8443: connect: connection refused
E0329 17:13:55.545279 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.76.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.76.2:8443: connect: connection refused
E0329 17:13:55.552708 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.76.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.76.2:8443: connect: connection refused
E0329 17:13:55.577220 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.76.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.76.2:8443: connect: connection refused
E0329 17:13:55.618148 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.76.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.76.2:8443: connect: connection refused
==> kube-controller-manager [945138d280da] <==
W0329 17:16:01.602269 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:16:26.400491 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:16:33.252575 1 request.go:655] Throttling request took 1.047594405s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
W0329 17:16:34.104096 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:16:56.902298 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:17:05.754676 1 request.go:655] Throttling request took 1.048587617s, request: GET:https://192.168.76.2:8443/apis/batch/v1beta1?timeout=32s
W0329 17:17:06.605695 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:17:27.404160 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:17:38.256393 1 request.go:655] Throttling request took 1.048504725s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0329 17:17:39.107949 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:17:57.905913 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:18:10.758380 1 request.go:655] Throttling request took 1.048302488s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0329 17:18:11.609817 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:18:28.407766 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:18:43.260189 1 request.go:655] Throttling request took 1.048211559s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
W0329 17:18:44.111724 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:18:58.909568 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:19:15.762255 1 request.go:655] Throttling request took 1.048514216s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
W0329 17:19:16.613639 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:19:29.411335 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:19:48.264033 1 request.go:655] Throttling request took 1.04835211s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
W0329 17:19:49.115581 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0329 17:19:59.913139 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0329 17:20:20.766861 1 request.go:655] Throttling request took 1.039767951s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0329 17:20:21.617360 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
==> kube-proxy [0c2204880506] <==
I0329 17:14:39.356780 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0329 17:14:39.357092 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0329 17:14:39.390816 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0329 17:14:39.391115 1 server_others.go:185] Using iptables Proxier.
I0329 17:14:39.391465 1 server.go:650] Version: v1.20.0
I0329 17:14:39.392570 1 config.go:315] Starting service config controller
I0329 17:14:39.394408 1 config.go:224] Starting endpoint slice config controller
I0329 17:14:39.408247 1 shared_informer.go:240] Waiting for caches to sync for service config
I0329 17:14:39.408447 1 shared_informer.go:247] Caches are synced for service config
I0329 17:14:39.408641 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0329 17:14:39.508852 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [e74725ac4203] <==
I0329 17:12:22.949825 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0329 17:12:22.949922 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0329 17:12:23.127641 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0329 17:12:23.127741 1 server_others.go:185] Using iptables Proxier.
I0329 17:12:23.127957 1 server.go:650] Version: v1.20.0
I0329 17:12:23.128755 1 config.go:315] Starting service config controller
I0329 17:12:23.128773 1 shared_informer.go:240] Waiting for caches to sync for service config
I0329 17:12:23.128789 1 config.go:224] Starting endpoint slice config controller
I0329 17:12:23.128792 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0329 17:12:23.230668 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0329 17:12:23.230949 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [99605fcde49b] <==
I0329 17:14:28.211095 1 serving.go:331] Generated self-signed cert in-memory
W0329 17:14:35.411348 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0329 17:14:35.411382 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0329 17:14:35.411394 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0329 17:14:35.411404 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0329 17:14:35.716270 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0329 17:14:35.729566 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0329 17:14:35.729589 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0329 17:14:35.752890 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0329 17:14:35.954761 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [f2fc2725f63d] <==
W0329 17:12:03.001489 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0329 17:12:03.001673 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0329 17:12:03.001797 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0329 17:12:03.067489 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0329 17:12:03.069153 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0329 17:12:03.069374 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0329 17:12:03.069967 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0329 17:12:03.078880 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0329 17:12:03.081924 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0329 17:12:03.082262 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0329 17:12:03.082502 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0329 17:12:03.082802 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0329 17:12:03.084345 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0329 17:12:03.084829 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0329 17:12:03.085256 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0329 17:12:03.085580 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0329 17:12:03.086722 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0329 17:12:03.086982 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0329 17:12:03.087758 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0329 17:12:03.957661 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0329 17:12:04.140800 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0329 17:12:04.142843 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0329 17:12:04.152756 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0329 17:12:04.243792 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0329 17:12:07.269631 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Mar 29 17:17:46 old-k8s-version-551944 kubelet[1457]: E0329 17:17:46.831876 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Mar 29 17:17:53 old-k8s-version-551944 kubelet[1457]: E0329 17:17:53.260430 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:17:59 old-k8s-version-551944 kubelet[1457]: E0329 17:17:59.260070 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Mar 29 17:18:04 old-k8s-version-551944 kubelet[1457]: E0329 17:18:04.260476 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:18:13 old-k8s-version-551944 kubelet[1457]: E0329 17:18:13.266456 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Mar 29 17:18:15 old-k8s-version-551944 kubelet[1457]: E0329 17:18:15.260277 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:18:25 old-k8s-version-551944 kubelet[1457]: E0329 17:18:25.267475 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Mar 29 17:18:27 old-k8s-version-551944 kubelet[1457]: E0329 17:18:27.260134 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:18:37 old-k8s-version-551944 kubelet[1457]: E0329 17:18:37.260145 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Mar 29 17:18:42 old-k8s-version-551944 kubelet[1457]: E0329 17:18:42.261218 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:18:50 old-k8s-version-551944 kubelet[1457]: E0329 17:18:50.260639 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Mar 29 17:18:57 old-k8s-version-551944 kubelet[1457]: E0329 17:18:57.260263 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:19:05 old-k8s-version-551944 kubelet[1457]: E0329 17:19:05.260268 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Mar 29 17:19:09 old-k8s-version-551944 kubelet[1457]: E0329 17:19:09.260200 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:19:20 old-k8s-version-551944 kubelet[1457]: E0329 17:19:20.268357 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Mar 29 17:19:22 old-k8s-version-551944 kubelet[1457]: E0329 17:19:22.264762 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:19:33 old-k8s-version-551944 kubelet[1457]: E0329 17:19:33.260820 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Mar 29 17:19:37 old-k8s-version-551944 kubelet[1457]: E0329 17:19:37.260349 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:19:45 old-k8s-version-551944 kubelet[1457]: E0329 17:19:45.261516 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Mar 29 17:19:50 old-k8s-version-551944 kubelet[1457]: E0329 17:19:50.260407 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:19:59 old-k8s-version-551944 kubelet[1457]: E0329 17:19:59.260075 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Mar 29 17:20:03 old-k8s-version-551944 kubelet[1457]: E0329 17:20:03.260216 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:20:14 old-k8s-version-551944 kubelet[1457]: E0329 17:20:14.260174 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Mar 29 17:20:16 old-k8s-version-551944 kubelet[1457]: E0329 17:20:16.260079 1457 pod_workers.go:191] Error syncing pod b645d486-edb6-4c8c-b9db-0d8ed91dd08e ("metrics-server-9975d5f86-wrjnc_kube-system(b645d486-edb6-4c8c-b9db-0d8ed91dd08e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Mar 29 17:20:25 old-k8s-version-551944 kubelet[1457]: E0329 17:20:25.271337 1457 pod_workers.go:191] Error syncing pod 6b0dfc98-b67c-4920-a215-7a9699d8ecec ("dashboard-metrics-scraper-8d5bb5db8-r8mng_kubernetes-dashboard(6b0dfc98-b67c-4920-a215-7a9699d8ecec)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
==> kubernetes-dashboard [ab0ca8585388] <==
2025/03/29 17:15:00 Starting overwatch
2025/03/29 17:15:00 Using namespace: kubernetes-dashboard
2025/03/29 17:15:00 Using in-cluster config to connect to apiserver
2025/03/29 17:15:00 Using secret token for csrf signing
2025/03/29 17:15:00 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/03/29 17:15:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/03/29 17:15:00 Successful initial request to the apiserver, version: v1.20.0
2025/03/29 17:15:00 Generating JWE encryption key
2025/03/29 17:15:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/03/29 17:15:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/03/29 17:15:01 Initializing JWE encryption key from synchronized object
2025/03/29 17:15:01 Creating in-cluster Sidecar client
2025/03/29 17:15:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:15:01 Serving insecurely on HTTP port: 9090
2025/03/29 17:15:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:16:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:16:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:17:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:17:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:18:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:18:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:19:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:19:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/03/29 17:20:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [766ed8e00c6e] <==
I0329 17:14:39.064021 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0329 17:15:09.066664 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
==> storage-provisioner [b4f0beb7b2eb] <==
I0329 17:15:23.535367 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0329 17:15:23.582400 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0329 17:15:23.582683 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0329 17:15:41.086980 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0329 17:15:41.106449 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ba32d94-93ce-46f6-bcfa-f4bc1e17b2dd", APIVersion:"v1", ResourceVersion:"805", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-551944_0f895890-340c-4402-92ce-21b64c92c109 became leader
I0329 17:15:41.109214 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-551944_0f895890-340c-4402-92ce-21b64c92c109!
I0329 17:15:41.209879 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-551944_0f895890-340c-4402-92ce-21b64c92c109!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-551944 -n old-k8s-version-551944
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-551944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-wrjnc dashboard-metrics-scraper-8d5bb5db8-r8mng
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-551944 describe pod metrics-server-9975d5f86-wrjnc dashboard-metrics-scraper-8d5bb5db8-r8mng
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-551944 describe pod metrics-server-9975d5f86-wrjnc dashboard-metrics-scraper-8d5bb5db8-r8mng: exit status 1 (108.535523ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-wrjnc" not found
Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-r8mng" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-551944 describe pod metrics-server-9975d5f86-wrjnc dashboard-metrics-scraper-8d5bb5db8-r8mng: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (378.48s)