=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-amd64 start -p old-k8s-version-694015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-694015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --kubernetes-version=v1.16.0: exit status 80 (15m31.251483455s)
-- stdout --
* [old-k8s-version-694015] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17297
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
* Using the kvm2 driver based on existing profile
* Starting control plane node old-k8s-version-694015 in cluster old-k8s-version-694015
* Restarting existing kvm2 VM for "old-k8s-version-694015" ...
* Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-694015 addons enable metrics-server
* Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
-- /stdout --
** stderr **
I0925 11:24:40.587662 57426 out.go:296] Setting OutFile to fd 1 ...
I0925 11:24:40.587801 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 11:24:40.587813 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:24:40.587820 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 11:24:40.588100 57426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
I0925 11:24:40.588816 57426 out.go:303] Setting JSON to false
I0925 11:24:40.590066 57426 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4032,"bootTime":1695637049,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0925 11:24:40.590144 57426 start.go:138] virtualization: kvm guest
I0925 11:24:40.592274 57426 out.go:177] * [old-k8s-version-694015] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I0925 11:24:40.594623 57426 out.go:177] - MINIKUBE_LOCATION=17297
I0925 11:24:40.596436 57426 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0925 11:24:40.594591 57426 notify.go:220] Checking for updates...
I0925 11:24:40.598264 57426 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
I0925 11:24:40.599930 57426 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
I0925 11:24:40.601598 57426 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0925 11:24:40.603255 57426 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0925 11:24:40.605387 57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0925 11:24:40.606018 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:24:40.606071 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:24:40.626954 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38367
I0925 11:24:40.628060 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:24:40.628684 57426 main.go:141] libmachine: Using API Version 1
I0925 11:24:40.628740 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:24:40.629148 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:24:40.629378 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:24:40.631543 57426 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
I0925 11:24:40.633238 57426 driver.go:373] Setting default libvirt URI to qemu:///system
I0925 11:24:40.633674 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:24:40.633745 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:24:40.649026 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
I0925 11:24:40.649692 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:24:40.650276 57426 main.go:141] libmachine: Using API Version 1
I0925 11:24:40.650328 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:24:40.650641 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:24:40.650833 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:24:40.690486 57426 out.go:177] * Using the kvm2 driver based on existing profile
I0925 11:24:40.691928 57426 start.go:298] selected driver: kvm2
I0925 11:24:40.691940 57426 start.go:902] validating driver "kvm2" against &{Name:old-k8s-version-694015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694015 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0925 11:24:40.692057 57426 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0925 11:24:40.692693 57426 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0925 11:24:40.692779 57426 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17297-6032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0925 11:24:40.707177 57426 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
I0925 11:24:40.707636 57426 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0925 11:24:40.707677 57426 cni.go:84] Creating CNI manager for ""
I0925 11:24:40.707702 57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0925 11:24:40.707715 57426 start_flags.go:321] config:
{Name:old-k8s-version-694015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694015 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0925 11:24:40.707942 57426 iso.go:125] acquiring lock: {Name:mkb9e2f6e1d5a2b50ee182236ae1b19ef3677829 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0925 11:24:40.710861 57426 out.go:177] * Starting control plane node old-k8s-version-694015 in cluster old-k8s-version-694015
I0925 11:24:40.712423 57426 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
I0925 11:24:40.712460 57426 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
I0925 11:24:40.712472 57426 cache.go:57] Caching tarball of preloaded images
I0925 11:24:40.712562 57426 preload.go:174] Found /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0925 11:24:40.712577 57426 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on docker
I0925 11:24:40.712708 57426 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/config.json ...
I0925 11:24:40.712889 57426 start.go:365] acquiring machines lock for old-k8s-version-694015: {Name:mk02fb3d97d6ed60b07ca18d96424c593d1bb8d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0925 11:24:40.712934 57426 start.go:369] acquired machines lock for "old-k8s-version-694015" in 24.9µs
I0925 11:24:40.712951 57426 start.go:96] Skipping create...Using existing machine configuration
I0925 11:24:40.712964 57426 fix.go:54] fixHost starting:
I0925 11:24:40.713244 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:24:40.713271 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:24:40.727190 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
I0925 11:24:40.727613 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:24:40.728064 57426 main.go:141] libmachine: Using API Version 1
I0925 11:24:40.728087 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:24:40.728504 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:24:40.728754 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:24:40.728912 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:24:40.730893 57426 fix.go:102] recreateIfNeeded on old-k8s-version-694015: state=Stopped err=<nil>
I0925 11:24:40.730919 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
W0925 11:24:40.731114 57426 fix.go:128] unexpected machine state, will restart: <nil>
I0925 11:24:40.733151 57426 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-694015" ...
I0925 11:24:40.734539 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Start
I0925 11:24:40.734798 57426 main.go:141] libmachine: (old-k8s-version-694015) Ensuring networks are active...
I0925 11:24:40.736933 57426 main.go:141] libmachine: (old-k8s-version-694015) Ensuring network default is active
I0925 11:24:40.737407 57426 main.go:141] libmachine: (old-k8s-version-694015) Ensuring network mk-old-k8s-version-694015 is active
I0925 11:24:40.737983 57426 main.go:141] libmachine: (old-k8s-version-694015) Getting domain xml...
I0925 11:24:40.738815 57426 main.go:141] libmachine: (old-k8s-version-694015) Creating domain...
I0925 11:24:42.307156 57426 main.go:141] libmachine: (old-k8s-version-694015) Waiting to get IP...
I0925 11:24:42.308255 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:42.308900 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:42.309007 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:42.308888 57460 retry.go:31] will retry after 222.729566ms: waiting for machine to come up
I0925 11:24:42.533808 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:42.534385 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:42.534423 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:42.534337 57460 retry.go:31] will retry after 362.103622ms: waiting for machine to come up
I0925 11:24:42.898185 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:42.898750 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:42.898780 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:42.898698 57460 retry.go:31] will retry after 476.874033ms: waiting for machine to come up
I0925 11:24:43.377385 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:43.377864 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:43.377888 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:43.377815 57460 retry.go:31] will retry after 439.843301ms: waiting for machine to come up
I0925 11:24:43.819586 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:43.820106 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:43.820129 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:43.820067 57460 retry.go:31] will retry after 639.618656ms: waiting for machine to come up
I0925 11:24:44.461710 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:44.462257 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:44.462285 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:44.462194 57460 retry.go:31] will retry after 764.340612ms: waiting for machine to come up
I0925 11:24:45.228293 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:45.228867 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:45.228892 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:45.228810 57460 retry.go:31] will retry after 795.396761ms: waiting for machine to come up
I0925 11:24:46.025469 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:46.025910 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:46.025952 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:46.025891 57460 retry.go:31] will retry after 1.29674171s: waiting for machine to come up
I0925 11:24:47.324945 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:47.325583 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:47.325615 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:47.325529 57460 retry.go:31] will retry after 1.518748069s: waiting for machine to come up
I0925 11:24:48.845862 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:48.846458 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:48.846518 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:48.846423 57460 retry.go:31] will retry after 1.604353924s: waiting for machine to come up
I0925 11:24:50.452522 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:50.453382 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:50.453412 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:50.453324 57460 retry.go:31] will retry after 2.86199606s: waiting for machine to come up
I0925 11:24:53.317639 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:53.318141 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:53.318177 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:53.318064 57460 retry.go:31] will retry after 3.10153544s: waiting for machine to come up
I0925 11:24:56.420998 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:56.421569 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | unable to find current IP address of domain old-k8s-version-694015 in network mk-old-k8s-version-694015
I0925 11:24:56.421598 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | I0925 11:24:56.421546 57460 retry.go:31] will retry after 2.981021856s: waiting for machine to come up
I0925 11:24:59.405685 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.406220 57426 main.go:141] libmachine: (old-k8s-version-694015) Found IP for machine: 192.168.50.17
I0925 11:24:59.406248 57426 main.go:141] libmachine: (old-k8s-version-694015) Reserving static IP address...
I0925 11:24:59.406265 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has current primary IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.406768 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "old-k8s-version-694015", mac: "52:54:00:e6:28:7c", ip: "192.168.50.17"} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:24:59.406802 57426 main.go:141] libmachine: (old-k8s-version-694015) Reserved static IP address: 192.168.50.17
I0925 11:24:59.406820 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | skip adding static IP to network mk-old-k8s-version-694015 - found existing host DHCP lease matching {name: "old-k8s-version-694015", mac: "52:54:00:e6:28:7c", ip: "192.168.50.17"}
I0925 11:24:59.406839 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Getting to WaitForSSH function...
I0925 11:24:59.406867 57426 main.go:141] libmachine: (old-k8s-version-694015) Waiting for SSH to be available...
I0925 11:24:59.408976 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.409297 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:24:59.409327 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.409411 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Using SSH client type: external
I0925 11:24:59.409462 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Using SSH private key: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa (-rw-------)
I0925 11:24:59.409503 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa -p 22] /usr/bin/ssh <nil>}
I0925 11:24:59.409523 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | About to run SSH command:
I0925 11:24:59.409539 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | exit 0
I0925 11:24:59.548605 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | SSH cmd err, output: <nil>:
I0925 11:24:59.549006 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetConfigRaw
I0925 11:24:59.549595 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetIP
I0925 11:24:59.552192 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.552618 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:24:59.552647 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.552987 57426 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/config.json ...
I0925 11:24:59.553160 57426 machine.go:88] provisioning docker machine ...
I0925 11:24:59.553175 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:24:59.553385 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetMachineName
I0925 11:24:59.553549 57426 buildroot.go:166] provisioning hostname "old-k8s-version-694015"
I0925 11:24:59.553575 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetMachineName
I0925 11:24:59.553713 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:24:59.556121 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.556490 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:24:59.556520 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.556726 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:24:59.556879 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:24:59.557011 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:24:59.557173 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:24:59.557338 57426 main.go:141] libmachine: Using SSH client type: native
I0925 11:24:59.557680 57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.17 22 <nil> <nil>}
I0925 11:24:59.557698 57426 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-694015 && echo "old-k8s-version-694015" | sudo tee /etc/hostname
I0925 11:24:59.703561 57426 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-694015
I0925 11:24:59.703603 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:24:59.706307 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.706671 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:24:59.706711 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.706822 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:24:59.707048 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:24:59.707221 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:24:59.707379 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:24:59.707553 57426 main.go:141] libmachine: Using SSH client type: native
I0925 11:24:59.708033 57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.17 22 <nil> <nil>}
I0925 11:24:59.708065 57426 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-694015' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-694015/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-694015' | sudo tee -a /etc/hosts;
fi
fi
I0925 11:24:59.841494 57426 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0925 11:24:59.841538 57426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17297-6032/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-6032/.minikube}
I0925 11:24:59.841568 57426 buildroot.go:174] setting up certificates
I0925 11:24:59.841579 57426 provision.go:83] configureAuth start
I0925 11:24:59.841592 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetMachineName
I0925 11:24:59.841896 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetIP
I0925 11:24:59.844771 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.845085 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:24:59.845118 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.845393 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:24:59.847727 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.848180 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:24:59.848233 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:24:59.848332 57426 provision.go:138] copyHostCerts
I0925 11:24:59.848387 57426 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem, removing ...
I0925 11:24:59.848397 57426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem
I0925 11:24:59.848463 57426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem (1078 bytes)
I0925 11:24:59.848546 57426 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem, removing ...
I0925 11:24:59.848556 57426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem
I0925 11:24:59.848580 57426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem (1123 bytes)
I0925 11:24:59.848627 57426 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem, removing ...
I0925 11:24:59.848634 57426 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem
I0925 11:24:59.848656 57426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem (1679 bytes)
I0925 11:24:59.848728 57426 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-694015 san=[192.168.50.17 192.168.50.17 localhost 127.0.0.1 minikube old-k8s-version-694015]
I0925 11:25:00.081298 57426 provision.go:172] copyRemoteCerts
I0925 11:25:00.081368 57426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0925 11:25:00.081389 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:25:00.084399 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:00.084826 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:25:00.084858 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:00.084992 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:25:00.085180 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:00.085351 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:25:00.085503 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:25:00.183002 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0925 11:25:00.209364 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0925 11:25:00.233825 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0925 11:25:00.259218 57426 provision.go:86] duration metric: configureAuth took 417.624647ms
I0925 11:25:00.259249 57426 buildroot.go:189] setting minikube options for container-runtime
I0925 11:25:00.259461 57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0925 11:25:00.259489 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:25:00.259745 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:25:00.261859 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:00.262253 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:25:00.262282 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:00.262406 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:25:00.262594 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:00.262757 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:00.262928 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:25:00.263085 57426 main.go:141] libmachine: Using SSH client type: native
I0925 11:25:00.263525 57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.17 22 <nil> <nil>}
I0925 11:25:00.263543 57426 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0925 11:25:00.390987 57426 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0925 11:25:00.391008 57426 buildroot.go:70] root file system type: tmpfs
I0925 11:25:00.391096 57426 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0925 11:25:00.391127 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:25:00.394147 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:00.394541 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:25:00.394577 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:00.394694 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:25:00.394876 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:00.395024 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:00.395180 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:25:00.395365 57426 main.go:141] libmachine: Using SSH client type: native
I0925 11:25:00.395679 57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.17 22 <nil> <nil>}
I0925 11:25:00.395748 57426 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0925 11:25:00.538360 57426 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0925 11:25:00.538398 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:25:00.541330 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:00.541684 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:25:00.541732 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:00.541988 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:25:00.542195 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:00.542376 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:00.542524 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:25:00.542734 57426 main.go:141] libmachine: Using SSH client type: native
I0925 11:25:00.543262 57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.17 22 <nil> <nil>}
I0925 11:25:00.543290 57426 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0925 11:25:01.431723 57426 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0925 11:25:01.431753 57426 machine.go:91] provisioned docker machine in 1.878579847s
I0925 11:25:01.431766 57426 start.go:300] post-start starting for "old-k8s-version-694015" (driver="kvm2")
I0925 11:25:01.431779 57426 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0925 11:25:01.431799 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:25:01.432193 57426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0925 11:25:01.432230 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:25:01.435233 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:01.435611 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:25:01.435643 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:01.435778 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:25:01.435966 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:01.436127 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:25:01.436275 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:25:01.540619 57426 ssh_runner.go:195] Run: cat /etc/os-release
I0925 11:25:01.545212 57426 info.go:137] Remote host: Buildroot 2021.02.12
I0925 11:25:01.545237 57426 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/addons for local assets ...
I0925 11:25:01.545315 57426 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/files for local assets ...
I0925 11:25:01.545418 57426 filesync.go:149] local asset: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem -> 132132.pem in /etc/ssl/certs
I0925 11:25:01.545526 57426 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0925 11:25:01.554611 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /etc/ssl/certs/132132.pem (1708 bytes)
I0925 11:25:01.580258 57426 start.go:303] post-start completed in 148.474128ms
I0925 11:25:01.580284 57426 fix.go:56] fixHost completed within 20.867322519s
I0925 11:25:01.580307 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:25:01.583254 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:01.583724 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:25:01.583768 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:01.583940 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:25:01.584118 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:01.584263 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:01.584378 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:25:01.584595 57426 main.go:141] libmachine: Using SSH client type: native
I0925 11:25:01.584952 57426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.50.17 22 <nil> <nil>}
I0925 11:25:01.584966 57426 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0925 11:25:01.713860 57426 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695641101.690775078
I0925 11:25:01.713885 57426 fix.go:206] guest clock: 1695641101.690775078
I0925 11:25:01.713895 57426 fix.go:219] Guest: 2023-09-25 11:25:01.690775078 +0000 UTC Remote: 2023-09-25 11:25:01.58028895 +0000 UTC m=+21.033561482 (delta=110.486128ms)
I0925 11:25:01.713933 57426 fix.go:190] guest clock delta is within tolerance: 110.486128ms
I0925 11:25:01.713941 57426 start.go:83] releasing machines lock for "old-k8s-version-694015", held for 21.00099493s
I0925 11:25:01.713974 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:25:01.714233 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetIP
I0925 11:25:01.717127 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:01.717478 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:25:01.717511 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:01.717663 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:25:01.718160 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:25:01.718312 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:25:01.718388 57426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0925 11:25:01.718432 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:25:01.718529 57426 ssh_runner.go:195] Run: cat /version.json
I0925 11:25:01.718553 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:25:01.721364 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:01.721628 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:01.721736 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:25:01.721766 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:01.721931 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:25:01.722037 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:25:01.722099 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:01.722104 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:01.722253 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:25:01.722340 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:25:01.722414 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:25:01.722485 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:25:01.722621 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:25:01.722755 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:25:01.847665 57426 ssh_runner.go:195] Run: systemctl --version
I0925 11:25:01.855260 57426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0925 11:25:01.862482 57426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0925 11:25:01.862548 57426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0925 11:25:01.875229 57426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0925 11:25:01.897491 57426 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
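The two `find ... -exec sed` commands above rewrite any bridge/podman CNI configs so their "subnet" becomes minikube's pod CIDR 10.244.0.0/16 (and podman's "gateway" becomes 10.244.0.1); that is why cni.go then reports 87-podman-bridge.conflist as configured. A pure-Go sketch of the same substitution over an in-memory conflist (file discovery, the IPv6-line deletion, and writing back are omitted):

package main

import (
	"fmt"
	"regexp"
)

var (
	subnetRe  = regexp.MustCompile(`"subnet":\s*"[^"]*"`)
	gatewayRe = regexp.MustCompile(`"gateway":\s*"[^"]*"`)
)

// rewriteCNI forces the pod subnet and gateway, mirroring the sed
// expressions shown in the log above.
func rewriteCNI(conf string) string {
	conf = subnetRe.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
	return gatewayRe.ReplaceAllString(conf, `"gateway": "10.244.0.1"`)
}

func main() {
	in := `{"ranges": [[{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}]]}`
	fmt.Println(rewriteCNI(in))
}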
I0925 11:25:01.897526 57426 start.go:469] detecting cgroup driver to use...
I0925 11:25:01.897667 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0925 11:25:01.918886 57426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
I0925 11:25:01.929912 57426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0925 11:25:01.941679 57426 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0925 11:25:01.941732 57426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0925 11:25:01.955647 57426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0925 11:25:01.969463 57426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0925 11:25:01.983215 57426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0925 11:25:01.996913 57426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0925 11:25:02.010860 57426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0925 11:25:02.023730 57426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0925 11:25:02.035214 57426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0925 11:25:02.047150 57426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0925 11:25:02.199973 57426 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0925 11:25:02.224251 57426 start.go:469] detecting cgroup driver to use...
I0925 11:25:02.224336 57426 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0925 11:25:02.245450 57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0925 11:25:02.260076 57426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0925 11:25:02.284448 57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0925 11:25:02.302774 57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0925 11:25:02.322905 57426 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0925 11:25:02.361137 57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0925 11:25:02.377691 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0925 11:25:02.398134 57426 ssh_runner.go:195] Run: which cri-dockerd
I0925 11:25:02.402981 57426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0925 11:25:02.414547 57426 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0925 11:25:02.432822 57426 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0925 11:25:02.563375 57426 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0925 11:25:02.706840 57426 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
I0925 11:25:02.706978 57426 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0925 11:25:02.728994 57426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0925 11:25:02.849318 57426 ssh_runner.go:195] Run: sudo systemctl restart docker
I0925 11:25:04.344306 57426 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.494952682s)
I0925 11:25:04.344377 57426 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0925 11:25:04.378626 57426 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
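The runtime version in the next status line ("Docker 24.0.6") comes from the query just above: the daemon is asked for its server version via a Go template. A sketch of the same probe, runnable wherever a docker daemon is reachable:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query as the log: docker version --format {{.Server.Version}}
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	fmt.Println("server version:", strings.TrimSpace(string(out)))
}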
I0925 11:25:04.413309 57426 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
I0925 11:25:04.413355 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetIP
I0925 11:25:04.415927 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:04.416288 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:25:04.416329 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:25:04.416513 57426 ssh_runner.go:195] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0925 11:25:04.421006 57426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
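The shell one-liner above is a small hosts-file update idiom: filter out any stale host.minikube.internal entry, append the fresh mapping, write to a temp file, then copy it over /etc/hosts in one step. The same transformation in pure Go (in-memory only; the real command also handles the tab/space details of existing entries):

package main

import (
	"fmt"
	"strings"
)

// updateHosts mirrors the { grep -v ...; echo ...; } pipeline above:
// drop stale host.minikube.internal lines, then append the new mapping.
func updateHosts(hosts, ip string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "host.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\thost.minikube.internal\n", ip)
}

func main() {
	fmt.Print(updateHosts("127.0.0.1\tlocalhost", "192.168.50.1"))
}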
I0925 11:25:04.436069 57426 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
I0925 11:25:04.436130 57426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0925 11:25:04.457302 57426 docker.go:664] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
registry.k8s.io/pause:3.1
k8s.gcr.io/pause:3.1
-- /stdout --
I0925 11:25:04.457326 57426 docker.go:670] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
I0925 11:25:04.457370 57426 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0925 11:25:04.466202 57426 ssh_runner.go:195] Run: which lz4
I0925 11:25:04.469996 57426 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0925 11:25:04.474022 57426 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0925 11:25:04.474044 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
I0925 11:25:06.107255 57426 docker.go:628] Took 1.637292 seconds to copy over tarball
I0925 11:25:06.107326 57426 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0925 11:25:08.816016 57426 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.708661547s)
I0925 11:25:08.816052 57426 ssh_runner.go:146] rm: /preloaded.tar.lz4
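The stat/scp/tar sequence above is a check-then-copy flow: `stat` exiting 1 means /preloaded.tar.lz4 is absent, so the ~370 MB preload tarball is copied into the VM, unpacked into /var with lz4, and then removed. A local sketch of the same flow under assumed placeholder paths (requires tar and lz4 on PATH; the real code copies over SSH):

package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

func ensureAndExtract(src, dst, into string) error {
	if _, err := os.Stat(dst); err != nil {
		// Absent (the "Process exited with status 1" case): copy it over.
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		if _, err := io.Copy(out, in); err != nil {
			return err
		}
	}
	// Same invocation as the log: tar -I lz4 -C <dir> -xf <tarball>
	cmd := exec.Command("tar", "-I", "lz4", "-C", into, "-xf", dst)
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return err
	}
	return os.Remove(dst) // the rm step in the log
}

func main() {
	if err := ensureAndExtract("preload.tar.lz4", "/tmp/preloaded.tar.lz4", "/tmp/preload"); err != nil {
		fmt.Println("preload failed:", err)
	}
}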
I0925 11:25:08.850512 57426 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0925 11:25:08.859144 57426 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3100 bytes)
I0925 11:25:08.875250 57426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0925 11:25:08.979616 57426 ssh_runner.go:195] Run: sudo systemctl restart docker
I0925 11:25:10.698985 57426 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.719331571s)
I0925 11:25:10.699077 57426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0925 11:25:10.721016 57426 docker.go:664] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
k8s.gcr.io/pause:3.1
registry.k8s.io/pause:3.1
-- /stdout --
I0925 11:25:10.721043 57426 docker.go:670] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
I0925 11:25:10.721053 57426 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
I0925 11:25:10.722442 57426 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
I0925 11:25:10.722491 57426 image.go:134] retrieving image: registry.k8s.io/pause:3.1
I0925 11:25:10.722454 57426 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
I0925 11:25:10.722454 57426 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0925 11:25:10.722455 57426 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
I0925 11:25:10.722460 57426 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
I0925 11:25:10.722480 57426 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
I0925 11:25:10.722482 57426 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
I0925 11:25:10.723053 57426 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
I0925 11:25:10.723206 57426 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0925 11:25:10.723233 57426 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
I0925 11:25:10.723284 57426 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
I0925 11:25:10.723291 57426 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
I0925 11:25:10.723294 57426 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
I0925 11:25:10.723284 57426 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
I0925 11:25:10.723727 57426 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
I0925 11:25:10.885160 57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
I0925 11:25:10.886038 57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
I0925 11:25:10.886075 57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
I0925 11:25:10.901732 57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
I0925 11:25:10.910884 57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
I0925 11:25:10.922280 57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
I0925 11:25:10.922280 57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
I0925 11:25:10.935346 57426 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
I0925 11:25:10.935395 57426 docker.go:317] Removing image: registry.k8s.io/etcd:3.3.15-0
I0925 11:25:10.935441 57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
I0925 11:25:10.948420 57426 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
I0925 11:25:10.948528 57426 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
I0925 11:25:10.948434 57426 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
I0925 11:25:10.948624 57426 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
I0925 11:25:10.948693 57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
I0925 11:25:10.948579 57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
I0925 11:25:10.988590 57426 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
I0925 11:25:10.988640 57426 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
I0925 11:25:10.988694 57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
I0925 11:25:10.991956 57426 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
I0925 11:25:10.992011 57426 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
I0925 11:25:10.992039 57426 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.16.0
I0925 11:25:10.992050 57426 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.2
I0925 11:25:10.992087 57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
I0925 11:25:10.992119 57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
I0925 11:25:10.992120 57426 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
I0925 11:25:11.015899 57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
I0925 11:25:11.022253 57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
I0925 11:25:11.035117 57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
I0925 11:25:11.045414 57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
I0925 11:25:11.045501 57426 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
I0925 11:25:11.348790 57426 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0925 11:25:11.374133 57426 cache_images.go:92] LoadImages completed in 653.062439ms
W0925 11:25:11.374241 57426 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
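Each `docker image inspect --format {{.Id}}` above is a cheap presence probe: a non-zero exit means the tagged image is missing from the runtime, so it is marked "needs transfer", removed with `docker rmi`, and queued to load from the on-disk cache. Here the cache file for etcd_3.3.15-0 is itself missing, so LoadImages finishes with the warning above rather than a hard failure. A sketch of the probe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether the container runtime already has the image,
// mirroring the inspect probes in the log above.
func imagePresent(ref string) (string, bool) {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err != nil {
		return "", false // non-zero exit: image absent, needs transfer
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	for _, ref := range []string{"registry.k8s.io/pause:3.1", "registry.k8s.io/etcd:3.3.15-0"} {
		if id, ok := imagePresent(ref); ok {
			fmt.Printf("%s present at %s\n", ref, id)
		} else {
			fmt.Printf("%s needs transfer\n", ref)
		}
	}
}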
I0925 11:25:11.374312 57426 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0925 11:25:11.405963 57426 cni.go:84] Creating CNI manager for ""
I0925 11:25:11.405993 57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0925 11:25:11.406013 57426 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0925 11:25:11.406037 57426 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.17 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-694015 NodeName:old-k8s-version-694015 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0925 11:25:11.406231 57426 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.50.17
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "old-k8s-version-694015"
kubeletExtraArgs:
node-ip: 192.168.50.17
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.50.17"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: old-k8s-version-694015
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.17:2381
kubernetesVersion: v1.16.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0925 11:25:11.406343 57426 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-694015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.17
[Install]
config:
{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694015 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
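The kubelet drop-in above is rendered from the cluster config: ExecStart is first cleared, then re-set with flags derived from the node (binary path for the Kubernetes version, container runtime, hostname override, node IP). A minimal text/template sketch of that rendering; the template text here is illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime={{.Runtime}} --hostname-override={{.Node}} --node-ip={{.IP}}
[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"Version": "v1.16.0",
		"Runtime": "docker",
		"Node":    "old-k8s-version-694015",
		"IP":      "192.168.50.17",
	})
}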
I0925 11:25:11.406419 57426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
I0925 11:25:11.416154 57426 binaries.go:44] Found k8s binaries, skipping transfer
I0925 11:25:11.416229 57426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0925 11:25:11.426088 57426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
I0925 11:25:11.443617 57426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0925 11:25:11.461066 57426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
I0925 11:25:11.477277 57426 ssh_runner.go:195] Run: grep 192.168.50.17 control-plane.minikube.internal$ /etc/hosts
I0925 11:25:11.481098 57426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.17 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0925 11:25:11.492472 57426 certs.go:56] Setting up /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015 for IP: 192.168.50.17
I0925 11:25:11.492519 57426 certs.go:190] acquiring lock for shared ca certs: {Name:mkb77fd8e605e52ea68ab5351af7de9da389c0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 11:25:11.492715 57426 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key
I0925 11:25:11.492775 57426 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key
I0925 11:25:11.492891 57426 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/client.key
I0925 11:25:11.492969 57426 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/apiserver.key.6142b612
I0925 11:25:11.493032 57426 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/proxy-client.key
I0925 11:25:11.493176 57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem (1338 bytes)
W0925 11:25:11.493218 57426 certs.go:433] ignoring /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213_empty.pem, impossibly tiny 0 bytes
I0925 11:25:11.493234 57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem (1675 bytes)
I0925 11:25:11.493273 57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem (1078 bytes)
I0925 11:25:11.493311 57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem (1123 bytes)
I0925 11:25:11.493347 57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem (1679 bytes)
I0925 11:25:11.493409 57426 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem (1708 bytes)
I0925 11:25:11.494801 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0925 11:25:11.522161 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0925 11:25:11.549159 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0925 11:25:11.575972 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/old-k8s-version-694015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0925 11:25:11.597528 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0925 11:25:11.619284 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0925 11:25:11.642480 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0925 11:25:11.665449 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0925 11:25:11.687812 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /usr/share/ca-certificates/132132.pem (1708 bytes)
I0925 11:25:11.711371 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0925 11:25:11.735934 57426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem --> /usr/share/ca-certificates/13213.pem (1338 bytes)
I0925 11:25:11.757797 57426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0925 11:25:11.773891 57426 ssh_runner.go:195] Run: openssl version
I0925 11:25:11.779561 57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132132.pem && ln -fs /usr/share/ca-certificates/132132.pem /etc/ssl/certs/132132.pem"
I0925 11:25:11.790731 57426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132132.pem
I0925 11:25:11.796032 57426 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 10:38 /usr/share/ca-certificates/132132.pem
I0925 11:25:11.796080 57426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132132.pem
I0925 11:25:11.801704 57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132132.pem /etc/ssl/certs/3ec20f2e.0"
I0925 11:25:11.813138 57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0925 11:25:11.823852 57426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0925 11:25:11.828441 57426 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
I0925 11:25:11.828493 57426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0925 11:25:11.834206 57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0925 11:25:11.845200 57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13213.pem && ln -fs /usr/share/ca-certificates/13213.pem /etc/ssl/certs/13213.pem"
I0925 11:25:11.858934 57426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13213.pem
I0925 11:25:11.864927 57426 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 10:38 /usr/share/ca-certificates/13213.pem
I0925 11:25:11.864974 57426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13213.pem
I0925 11:25:11.871976 57426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13213.pem /etc/ssl/certs/51391683.0"
I0925 11:25:11.885846 57426 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0925 11:25:11.890495 57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0925 11:25:11.896654 57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0925 11:25:11.902657 57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0925 11:25:11.908626 57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0925 11:25:11.914386 57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0925 11:25:11.920901 57426 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0925 11:25:11.927115 57426 kubeadm.go:404] StartCluster: {Name:old-k8s-version-694015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-694015 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0925 11:25:11.927268 57426 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0925 11:25:11.949369 57426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0925 11:25:11.961069 57426 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I0925 11:25:11.961093 57426 kubeadm.go:636] restartCluster start
I0925 11:25:11.961142 57426 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0925 11:25:11.971923 57426 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0925 11:25:11.972450 57426 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-694015" does not appear in /home/jenkins/minikube-integration/17297-6032/kubeconfig
I0925 11:25:11.972749 57426 kubeconfig.go:146] "old-k8s-version-694015" context is missing from /home/jenkins/minikube-integration/17297-6032/kubeconfig - will repair!
I0925 11:25:11.973200 57426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 11:25:11.974796 57426 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0925 11:25:11.983812 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:11.983855 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:11.994861 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:11.994887 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:11.994937 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:12.005652 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:12.506376 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:12.506455 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:12.520081 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:13.006631 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:13.006695 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:13.019568 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:13.505914 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:13.506006 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:13.518385 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:14.005809 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:14.005874 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:14.019345 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:14.505870 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:14.505971 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:14.519278 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:15.005761 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:15.005847 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:15.019304 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:15.505775 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:15.505861 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:15.522069 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:16.006204 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:16.006301 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:16.019867 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:16.506529 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:16.506617 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:16.518437 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:17.006003 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:17.006072 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:17.017665 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:17.506193 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:17.506270 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:17.518866 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:18.006479 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:18.006549 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:18.018134 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:18.506718 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:18.506779 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:18.518368 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:19.005863 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:19.005914 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:19.019889 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:19.506525 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:19.506610 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:19.518123 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:20.006750 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:20.006834 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:20.018691 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:20.505853 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:20.505944 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:20.518163 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:21.005743 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:21.005799 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:21.018421 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:21.505927 57426 api_server.go:166] Checking apiserver status ...
I0925 11:25:21.505992 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:25:21.518395 57426 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:25:21.984233 57426 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
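The repeated "Checking apiserver status" lines above are a deadline-bounded poll: roughly every 500ms, pgrep looks for a running kube-apiserver process, and when the context deadline expires with no match the cluster is marked as needing reconfiguration, as the kubeadm.go:611 line records. A sketch of that loop; the 10s deadline is inferred from the timestamps (11:25:11 to 11:25:21), an assumption rather than a documented value, and sudo is omitted:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// Same probe as the log: pgrep -xnf kube-apiserver.*minikube.*
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded" -> needs reconfigure
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println("result:", waitForAPIServer(ctx))
}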
I0925 11:25:21.984268 57426 kubeadm.go:1128] stopping kube-system containers ...
I0925 11:25:21.984338 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0925 11:25:22.006278 57426 docker.go:463] Stopping containers: [6fc1a53ec6fe fd5a5b49ebb6 ae4bcf7dc2cb da81a748f8c6 18341e03937a c198cace2d43 2ea2541ac22c 4fbe3df9792c 8cd0717575c9 eedc3bc3189c c5ece3832a65 1b6622ab649f 8a8af2658d58 7aba7a4dd998]
I0925 11:25:22.006354 57426 ssh_runner.go:195] Run: docker stop 6fc1a53ec6fe fd5a5b49ebb6 ae4bcf7dc2cb da81a748f8c6 18341e03937a c198cace2d43 2ea2541ac22c 4fbe3df9792c 8cd0717575c9 eedc3bc3189c c5ece3832a65 1b6622ab649f 8a8af2658d58 7aba7a4dd998
I0925 11:25:22.030284 57426 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0925 11:25:22.048892 57426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0925 11:25:22.058675 57426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0925 11:25:22.058725 57426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0925 11:25:22.069869 57426 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0925 11:25:22.069887 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:25:22.203346 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:25:23.343648 57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.140263014s)
I0925 11:25:23.343682 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:25:23.609027 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:25:23.759944 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:25:23.877711 57426 api_server.go:52] waiting for apiserver process to appear ...
I0925 11:25:23.877795 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:25:23.894065 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:25:24.409145 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:25:24.909264 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:25:25.409155 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:25:25.908595 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:25:25.941174 57426 api_server.go:72] duration metric: took 2.063462682s to wait for apiserver process to appear ...
I0925 11:25:25.941202 57426 api_server.go:88] waiting for apiserver healthz status ...
I0925 11:25:25.941221 57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
I0925 11:25:30.814959 57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0925 11:25:30.814986 57426 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0925 11:25:30.814998 57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
I0925 11:25:30.848727 57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
W0925 11:25:30.848763 57426 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
I0925 11:25:31.349509 57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
I0925 11:25:31.387359 57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0925 11:25:31.387410 57426 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0925 11:25:31.848937 57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
I0925 11:25:31.867183 57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0925 11:25:31.867218 57426 api_server.go:103] status: https://192.168.50.17:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0925 11:25:32.349854 57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
I0925 11:25:32.360469 57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
ok
I0925 11:25:32.369167 57426 api_server.go:141] control plane version: v1.16.0
I0925 11:25:32.369203 57426 api_server.go:131] duration metric: took 6.427991735s to wait for apiserver health ...
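The healthz progression above is typical for a restarted control plane: first 403 while anonymous access to /healthz is still forbidden (the RBAC bootstrap roles do not exist yet), then 500 with per-check [+]/[-] lines while poststarthooks like rbac/bootstrap-roles finish, and finally 200 "ok". A sketch of the probe itself; the apiserver certificate is signed by minikubeCA rather than a system root, so this illustrative version skips verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Self-signed control-plane cert: skip verification for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.17:8443/healthz")
	if err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d:\n%s\n", resp.StatusCode, body)
}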
I0925 11:25:32.369217 57426 cni.go:84] Creating CNI manager for ""
I0925 11:25:32.369231 57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0925 11:25:32.369242 57426 system_pods.go:43] waiting for kube-system pods to appear ...
I0925 11:25:32.380134 57426 system_pods.go:59] 7 kube-system pods found
I0925 11:25:32.380171 57426 system_pods.go:61] "coredns-5644d7b6d9-5c2wq" [9b690088-7bfd-4691-b173-f4334779d35a] Running
I0925 11:25:32.380184 57426 system_pods.go:61] "etcd-old-k8s-version-694015" [36dee6e4-aeee-4551-9d8b-1ca1bea32994] Running
I0925 11:25:32.380196 57426 system_pods.go:61] "kube-apiserver-old-k8s-version-694015" [90dc280a-6164-49e3-85e7-1c65362aedc4] Running
I0925 11:25:32.380209 57426 system_pods.go:61] "kube-controller-manager-old-k8s-version-694015" [d9517a82-2ba1-4805-b8da-9e5b2ac42e3f] Running
I0925 11:25:32.380217 57426 system_pods.go:61] "kube-proxy-tz4wl" [878e4f41-5b17-43b3-8f64-43a5f3f1b33f] Running
I0925 11:25:32.380225 57426 system_pods.go:61] "kube-scheduler-old-k8s-version-694015" [b9b2adb4-7746-42df-a854-f4c222d53d98] Running
I0925 11:25:32.380236 57426 system_pods.go:61] "storage-provisioner" [ecfa3d77-460f-4a09-b035-18707c06fed3] Running
I0925 11:25:32.380250 57426 system_pods.go:74] duration metric: took 10.9971ms to wait for pod list to return data ...
I0925 11:25:32.380264 57426 node_conditions.go:102] verifying NodePressure condition ...
I0925 11:25:32.394660 57426 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0925 11:25:32.394700 57426 node_conditions.go:123] node cpu capacity is 2
I0925 11:25:32.394715 57426 node_conditions.go:105] duration metric: took 14.439734ms to run NodePressure ...
I0925 11:25:32.394736 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:25:32.961075 57426 kubeadm.go:772] waiting for restarted kubelet to initialise ...
I0925 11:25:32.965086 57426 retry.go:31] will retry after 188.562442ms: kubelet not initialised
I0925 11:25:33.160477 57426 retry.go:31] will retry after 370.071584ms: kubelet not initialised
I0925 11:25:33.536011 57426 retry.go:31] will retry after 824.663389ms: kubelet not initialised
I0925 11:25:34.365405 57426 retry.go:31] will retry after 810.880807ms: kubelet not initialised
I0925 11:25:35.185131 57426 retry.go:31] will retry after 1.721240677s: kubelet not initialised
I0925 11:25:36.911363 57426 retry.go:31] will retry after 2.193241834s: kubelet not initialised
I0925 11:25:39.112946 57426 retry.go:31] will retry after 1.951980278s: kubelet not initialised
I0925 11:25:41.071011 57426 retry.go:31] will retry after 6.193937978s: kubelet not initialised
I0925 11:25:47.274201 57426 retry.go:31] will retry after 4.606339091s: kubelet not initialised
I0925 11:25:51.885465 57426 retry.go:31] will retry after 8.801943251s: kubelet not initialised
I0925 11:26:00.693610 57426 retry.go:31] will retry after 12.468242279s: kubelet not initialised
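The "will retry after ..." intervals above grow roughly geometrically but with jitter, which is why they wander (4.6s after 8.8s was scheduled, for example) rather than doubling exactly. A jittered-backoff sketch of the same shape; the base delay, growth factor, and jitter range are assumptions for illustration, not retry.go's actual constants:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with jittered, exponentially growing delays,
// similar in shape to the retry.go lines in the log above.
func retryWithBackoff(fn func() error, attempts int) error {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Jitter: 0.5x-1.5x of the nominal delay.
		j := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v\n", j)
		time.Sleep(j)
		delay *= 2
	}
	return errors.New("still failing after all attempts")
}

func main() {
	i := 0
	_ = retryWithBackoff(func() error {
		if i++; i < 5 {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, 10)
}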
I0925 11:26:13.171303 57426 kubeadm.go:787] kubelet initialised
I0925 11:26:13.171330 57426 kubeadm.go:788] duration metric: took 40.21022654s waiting for restarted kubelet to initialise ...
I0925 11:26:13.171339 57426 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 11:26:13.179728 57426 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-2mp5v" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.189191 57426 pod_ready.go:92] pod "coredns-5644d7b6d9-2mp5v" in "kube-system" namespace has status "Ready":"True"
I0925 11:26:13.189214 57426 pod_ready.go:81] duration metric: took 9.450882ms waiting for pod "coredns-5644d7b6d9-2mp5v" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.189224 57426 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-5c2wq" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.196774 57426 pod_ready.go:92] pod "coredns-5644d7b6d9-5c2wq" in "kube-system" namespace has status "Ready":"True"
I0925 11:26:13.196799 57426 pod_ready.go:81] duration metric: took 7.568804ms waiting for pod "coredns-5644d7b6d9-5c2wq" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.196811 57426 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.203653 57426 pod_ready.go:92] pod "etcd-old-k8s-version-694015" in "kube-system" namespace has status "Ready":"True"
I0925 11:26:13.203673 57426 pod_ready.go:81] duration metric: took 6.854302ms waiting for pod "etcd-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.203685 57426 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.210092 57426 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-694015" in "kube-system" namespace has status "Ready":"True"
I0925 11:26:13.210112 57426 pod_ready.go:81] duration metric: took 6.417933ms waiting for pod "kube-apiserver-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.210123 57426 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.566312 57426 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-694015" in "kube-system" namespace has status "Ready":"True"
I0925 11:26:13.566341 57426 pod_ready.go:81] duration metric: took 356.208747ms waiting for pod "kube-controller-manager-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.566354 57426 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tz4wl" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.966900 57426 pod_ready.go:92] pod "kube-proxy-tz4wl" in "kube-system" namespace has status "Ready":"True"
I0925 11:26:13.966931 57426 pod_ready.go:81] duration metric: took 400.568203ms waiting for pod "kube-proxy-tz4wl" in "kube-system" namespace to be "Ready" ...
I0925 11:26:13.966944 57426 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
I0925 11:26:14.366660 57426 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-694015" in "kube-system" namespace has status "Ready":"True"
I0925 11:26:14.366737 57426 pod_ready.go:81] duration metric: took 399.776351ms waiting for pod "kube-scheduler-old-k8s-version-694015" in "kube-system" namespace to be "Ready" ...
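Each pod_ready.go wait above polls a pod until its Ready condition reports True, bounded by the stated timeout. A hedged client-go sketch of such a wait (poll interval, kubeconfig path, and helper name are illustrative, not minikube's internals):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True or the timeout expires.
func waitPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(client, "kube-system", "etcd-old-k8s-version-694015", 4*time.Minute); err != nil {
		fmt.Println("wait failed:", err)
	}
}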
I0925 11:26:14.366759 57426 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace to be "Ready" ...
I0925 11:26:16.674664 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:19.173958 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:21.674537 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:23.674786 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:25.674931 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:27.675303 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:29.675699 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:32.174922 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:34.674412 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:36.674708 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:39.174788 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:41.674981 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:44.173921 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:46.673916 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:49.172901 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:51.174245 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:53.174435 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:55.673610 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:26:57.673747 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:00.173135 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:02.673309 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:04.674279 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:06.674799 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:08.674858 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:11.174786 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:13.673493 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:15.674090 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:18.175688 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:20.674888 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:22.679772 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:25.174721 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:27.674564 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:30.174086 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:32.174464 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:34.673511 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:36.674414 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:39.175305 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:41.673238 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:43.675950 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:46.174549 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:48.675418 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:51.174891 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:53.675016 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:56.173958 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:27:58.174407 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:00.174454 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:02.174841 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:04.175287 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:06.674679 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:09.173838 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:11.174091 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:13.174267 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:15.674829 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:18.175095 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:20.674171 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:22.674573 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:25.174611 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:27.673983 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:29.675459 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:32.173159 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:34.672934 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:36.673537 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:38.675023 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:41.172736 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:43.174138 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:45.174205 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:47.176223 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:49.674353 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:52.173594 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:54.173762 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:56.673626 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:58.673704 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:00.674496 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:02.676016 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:04.677117 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:07.173790 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:09.673547 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:12.173257 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:14.673817 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:17.173554 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:19.674607 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:22.173742 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:24.674422 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:27.174742 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:29.673522 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:31.674133 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:34.173962 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:36.175249 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:38.674512 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:41.172242 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:43.173423 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:45.174163 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:47.174974 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:49.673662 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:52.173811 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:54.673161 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:56.674157 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:59.174193 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:01.674624 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:04.179180 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:06.676262 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:09.174330 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:11.175516 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:13.673816 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:14.366919 57426 pod_ready.go:81] duration metric: took 4m0.00014225s waiting for pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace to be "Ready" ...
E0925 11:30:14.366953 57426 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0925 11:30:14.366991 57426 pod_ready.go:38] duration metric: took 4m1.195639658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 11:30:14.367015 57426 kubeadm.go:640] restartCluster took 5m2.405916758s
W0925 11:30:14.367083 57426 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
I0925 11:30:14.367112 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0925 11:30:17.424908 57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3.057768249s)
I0925 11:30:17.424975 57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0925 11:30:17.439514 57426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0925 11:30:17.449686 57426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0925 11:30:17.460096 57426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
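Every "ssh_runner.go:195] Run:" line executes a command inside the VM over SSH. A minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the host, user, and key path printed later in this log (illustrative only, not minikube's ssh_runner; error handling trimmed to panics):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.50.17:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// The reset command from the log; PATH points at the version-pinned kubeadm.
	out, err := session.CombinedOutput(`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force`)
	fmt.Println(string(out), err)
}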
I0925 11:30:17.460147 57426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I0925 11:30:17.622252 57426 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0925 11:30:17.662261 57426 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
I0925 11:30:17.759764 57426 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0925 11:30:30.749642 57426 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
I0925 11:30:30.749742 57426 kubeadm.go:322] [preflight] Running pre-flight checks
I0925 11:30:30.749858 57426 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0925 11:30:30.749944 57426 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0925 11:30:30.750021 57426 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0925 11:30:30.750109 57426 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0925 11:30:30.750191 57426 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0925 11:30:30.750247 57426 kubeadm.go:322] [kubelet-start] Activating the kubelet service
I0925 11:30:30.750371 57426 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0925 11:30:30.751913 57426 out.go:204] - Generating certificates and keys ...
I0925 11:30:30.752003 57426 kubeadm.go:322] [certs] Using existing ca certificate authority
I0925 11:30:30.752119 57426 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0925 11:30:30.752232 57426 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0925 11:30:30.752318 57426 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0925 11:30:30.752414 57426 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0925 11:30:30.752468 57426 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0925 11:30:30.752570 57426 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0925 11:30:30.752681 57426 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0925 11:30:30.752781 57426 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0925 11:30:30.752890 57426 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0925 11:30:30.752940 57426 kubeadm.go:322] [certs] Using the existing "sa" key
I0925 11:30:30.753020 57426 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0925 11:30:30.753090 57426 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0925 11:30:30.753154 57426 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0925 11:30:30.753251 57426 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0925 11:30:30.753324 57426 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0925 11:30:30.753398 57426 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0925 11:30:30.755107 57426 out.go:204] - Booting up control plane ...
I0925 11:30:30.755208 57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0925 11:30:30.755334 57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0925 11:30:30.755426 57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0925 11:30:30.755500 57426 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0925 11:30:30.755652 57426 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0925 11:30:30.755754 57426 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.505077 seconds
I0925 11:30:30.755912 57426 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0925 11:30:30.756083 57426 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
I0925 11:30:30.756182 57426 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0925 11:30:30.756384 57426 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-694015 as control-plane by adding the label "node-role.kubernetes.io/master=''"
I0925 11:30:30.756471 57426 kubeadm.go:322] [bootstrap-token] Using token: snq27o.n0f9uw50v17gbayd
I0925 11:30:30.758173 57426 out.go:204] - Configuring RBAC rules ...
I0925 11:30:30.758310 57426 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0925 11:30:30.758487 57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0925 11:30:30.758649 57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0925 11:30:30.758810 57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0925 11:30:30.758962 57426 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0925 11:30:30.759033 57426 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0925 11:30:30.759112 57426 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0925 11:30:30.759121 57426 kubeadm.go:322]
I0925 11:30:30.759191 57426 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0925 11:30:30.759205 57426 kubeadm.go:322]
I0925 11:30:30.759275 57426 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0925 11:30:30.759285 57426 kubeadm.go:322]
I0925 11:30:30.759329 57426 kubeadm.go:322] mkdir -p $HOME/.kube
I0925 11:30:30.759379 57426 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0925 11:30:30.759421 57426 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0925 11:30:30.759429 57426 kubeadm.go:322]
I0925 11:30:30.759483 57426 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0925 11:30:30.759595 57426 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0925 11:30:30.759689 57426 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0925 11:30:30.759705 57426 kubeadm.go:322]
I0925 11:30:30.759821 57426 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0925 11:30:30.759962 57426 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0925 11:30:30.759977 57426 kubeadm.go:322]
I0925 11:30:30.760084 57426 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
I0925 11:30:30.760216 57426 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 \
I0925 11:30:30.760255 57426 kubeadm.go:322] --control-plane
I0925 11:30:30.760264 57426 kubeadm.go:322]
I0925 11:30:30.760361 57426 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0925 11:30:30.760370 57426 kubeadm.go:322]
I0925 11:30:30.760469 57426 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
I0925 11:30:30.760617 57426 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54
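The --discovery-token-ca-cert-hash printed above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch that recomputes it from the certificateDir shown in the [certs] lines (path assumed; run inside the VM):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// certificateDir from the [certs] line above; adjust if yours differs.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo with SHA-256.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}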
I0925 11:30:30.760630 57426 cni.go:84] Creating CNI manager for ""
I0925 11:30:30.760655 57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0925 11:30:30.760693 57426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0925 11:30:30.760827 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:30.760899 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=old-k8s-version-694015 minikube.k8s.io/updated_at=2023_09_25T11_30_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:30.820984 57426 ops.go:34] apiserver oom_adj: -16
I0925 11:30:31.034555 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:31.165894 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:31.768765 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:32.269393 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:32.768687 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:33.269126 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:33.768794 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:34.269149 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:34.769469 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:35.268685 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:35.769384 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:36.269510 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:36.768848 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:37.268799 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:37.769259 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:38.269444 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:38.769081 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:39.269471 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:39.768795 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:40.269215 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:40.768992 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:41.269161 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:41.768782 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:42.269438 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:42.769149 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:43.268490 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:43.768911 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:44.269363 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:44.769428 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:45.268548 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:45.769489 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:46.046613 57426 kubeadm.go:1081] duration metric: took 15.285826285s to wait for elevateKubeSystemPrivileges.
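The repeated "kubectl get sa default" runs above poll until the controller-manager has created the default ServiceAccount, at which point the cluster-admin binding created earlier can take effect. A rough client-go equivalent (namespace, interval, and kubeconfig path are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Poll until the default ServiceAccount exists, the same condition the
	// repeated "get sa default" runs above are checking.
	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		return err == nil, nil
	})
	fmt.Println("default service account ready:", err == nil)
}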
I0925 11:30:46.046655 57426 kubeadm.go:406] StartCluster complete in 5m34.119546847s
I0925 11:30:46.046676 57426 settings.go:142] acquiring lock: {Name:mk372f3d0f6e5777ebfc48341e146821f27f636c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 11:30:46.046764 57426 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17297-6032/kubeconfig
I0925 11:30:46.048206 57426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 11:30:46.048432 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0925 11:30:46.048574 57426 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0925 11:30:46.048644 57426 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-694015"
I0925 11:30:46.048653 57426 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-694015"
I0925 11:30:46.048678 57426 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-694015"
I0925 11:30:46.048687 57426 addons.go:69] Setting dashboard=true in profile "old-k8s-version-694015"
W0925 11:30:46.048690 57426 addons.go:240] addon storage-provisioner should already be in state true
I0925 11:30:46.048698 57426 addons.go:231] Setting addon dashboard=true in "old-k8s-version-694015"
W0925 11:30:46.048709 57426 addons.go:240] addon dashboard should already be in state true
I0925 11:30:46.048720 57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0925 11:30:46.048735 57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
I0925 11:30:46.048744 57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
I0925 11:30:46.048818 57426 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-694015"
I0925 11:30:46.048847 57426 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-694015"
W0925 11:30:46.048855 57426 addons.go:240] addon metrics-server should already be in state true
I0925 11:30:46.048680 57426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-694015"
I0925 11:30:46.048796 57426 cache.go:107] acquiring lock: {Name:mk67fca357e44d730577a3f111223198f60ef976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0925 11:30:46.048888 57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
I0925 11:30:46.048935 57426 cache.go:115] /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0925 11:30:46.048944 57426 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 153.391µs
I0925 11:30:46.048955 57426 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0925 11:30:46.048963 57426 cache.go:87] Successfully saved all images to host disk.
I0925 11:30:46.049135 57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0925 11:30:46.049144 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.049162 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.049168 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.049183 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.049247 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.049260 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.049320 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.049333 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.049505 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.049555 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.072180 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
I0925 11:30:46.072238 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
I0925 11:30:46.072269 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
I0925 11:30:46.072356 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
I0925 11:30:46.072357 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
I0925 11:30:46.072696 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.072776 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.072860 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.073344 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.073364 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.073496 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.073509 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.073509 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.073756 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.073762 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.073964 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.074195 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.074210 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.074253 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.074286 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.074439 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.074467 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.074610 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.074656 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.074686 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.074715 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.074930 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.075069 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.075101 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.075234 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.075269 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.075582 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.075811 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.077659 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.077697 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.094611 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
I0925 11:30:46.097022 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
I0925 11:30:46.097145 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.097460 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.097748 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.097767 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.098172 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.098314 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.098333 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.098564 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.098618 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.099229 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.101256 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:30:46.103863 57426 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0925 11:30:46.102124 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:30:46.102436 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
I0925 11:30:46.106576 57426 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0925 11:30:46.105560 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.109500 57426 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0925 11:30:46.108220 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0925 11:30:46.108845 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.110913 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.110969 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0925 11:30:46.110985 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:30:46.110999 57426 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0925 11:30:46.111011 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0925 11:30:46.111024 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:30:46.112450 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.112637 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:30:46.112839 57426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0925 11:30:46.112862 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:30:46.115509 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.115949 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:30:46.115983 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.116123 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:30:46.116214 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.116253 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:30:46.116342 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:30:46.116466 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:30:46.116484 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:30:46.116508 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.116774 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:30:46.116925 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:30:46.117104 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:30:46.117252 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:30:46.119073 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.119413 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:30:46.119430 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.119685 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:30:46.119854 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:30:46.120011 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:30:46.120148 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:30:46.127174 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
I0925 11:30:46.127843 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.128399 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.128428 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.128967 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.129155 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.129945 57426 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-694015" context rescaled to 1 replicas
I0925 11:30:46.129977 57426 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
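The kapi.go line above rescales the coredns deployment to a single replica via the scale subresource. A hedged client-go sketch of that operation (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17297-6032/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	deployments := client.AppsV1().Deployments("kube-system")

	// Fetch the scale subresource and pin it to one replica, like the
	// "rescaled to 1 replicas" line above.
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns scaled to 1 replica")
}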
I0925 11:30:46.131741 57426 out.go:177] * Verifying Kubernetes components...
I0925 11:30:46.133087 57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0925 11:30:46.130848 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:30:46.134728 57426 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0925 11:30:46.136080 57426 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0925 11:30:46.136097 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0925 11:30:46.136115 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:30:46.139231 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.139692 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:30:46.139718 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.139957 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:30:46.140113 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:30:46.140252 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:30:46.140377 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:30:46.147885 57426 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-694015"
W0925 11:30:46.147907 57426 addons.go:240] addon default-storageclass should already be in state true
I0925 11:30:46.147934 57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
I0925 11:30:46.148356 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.148384 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.173474 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
I0925 11:30:46.174243 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.174879 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.174900 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.176033 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.176694 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.176736 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.196631 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
I0925 11:30:46.197107 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.197645 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.197665 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.198067 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.198270 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.200093 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:30:46.200354 57426 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0925 11:30:46.200371 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0925 11:30:46.200390 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:30:46.203486 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.203884 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:30:46.203998 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.204172 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:30:46.204342 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:30:46.204489 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:30:46.204636 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:30:46.413931 57426 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-694015" to be "Ready" ...
I0925 11:30:46.414008 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.50.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0925 11:30:46.416569 57426 node_ready.go:49] node "old-k8s-version-694015" has status "Ready":"True"
I0925 11:30:46.416586 57426 node_ready.go:38] duration metric: took 2.626333ms waiting for node "old-k8s-version-694015" to be "Ready" ...
I0925 11:30:46.416594 57426 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 11:30:46.420795 57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
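(Editor's illustration: the node_ready/pod_ready waits above poll the apiserver for a Ready condition until a timeout. A minimal client-go sketch of that polling pattern follows; the 2s interval, function name, and error handling are assumptions for illustration, not minikube's actual helpers.)

// Sketch of a node-readiness poll like the wait above (assumed, not minikube's code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node reports Ready=True or the timeout expires.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "old-k8s-version-694015", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}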
I0925 11:30:46.484507 57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0925 11:30:46.484532 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0925 11:30:46.532417 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0925 11:30:46.532443 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0925 11:30:46.575299 57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0925 11:30:46.575317 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0925 11:30:46.595994 57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0925 11:30:46.596018 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0925 11:30:46.652448 57426 docker.go:664] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
registry.k8s.io/pause:3.1
k8s.gcr.io/pause:3.1
-- /stdout --
I0925 11:30:46.652473 57426 cache_images.go:84] Images are preloaded, skipping loading
I0925 11:30:46.652480 57426 cache_images.go:262] succeeded pushing to: old-k8s-version-694015
I0925 11:30:46.652483 57426 cache_images.go:263] failed pushing to:
I0925 11:30:46.652504 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:46.652518 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:46.652957 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:46.652963 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:46.652991 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:46.653007 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:46.653020 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:46.653288 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:46.653304 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:46.705521 57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0925 11:30:46.707099 57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0925 11:30:46.712115 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0925 11:30:46.712134 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0925 11:30:46.762833 57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0925 11:30:46.851711 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0925 11:30:46.851753 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0925 11:30:47.115165 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0925 11:30:47.115193 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0925 11:30:47.386363 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
I0925 11:30:47.386386 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0925 11:30:47.610468 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0925 11:30:47.610490 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0925 11:30:47.697559 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0925 11:30:47.697578 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0925 11:30:47.864150 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0925 11:30:47.864169 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0925 11:30:47.915917 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0925 11:30:47.915945 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
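(Editor's illustration: each "scp memory -->" line above streams an in-memory manifest to a file on the VM over the SSH connection opened earlier, whose IP, port, user, and key path appear in the sshutil line. A rough sketch of that pattern under those assumptions; copyMemory, the tee command, and the placeholder manifest are illustrative, not minikube's ssh_runner.)

// Sketch (assumed) of copying in-memory bytes to a remote path over SSH.
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func copyMemory(client *ssh.Client, data []byte, dest string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	// tee under sudo writes stdin to the destination path on the guest.
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", dest))
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.50.17:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	manifest := []byte("apiVersion: v1\nkind: ConfigMap\n") // placeholder content
	if err := copyMemory(client, manifest, "/etc/kubernetes/addons/example.yaml"); err != nil {
		panic(err)
	}
}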
I0925 11:30:48.000793 57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.50.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.586742998s)
I0925 11:30:48.000836 57426 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
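(Editor's note: the sed pipeline that just completed splices a hosts stanza into the CoreDNS Corefile before the forward directive, and a log directive before errors, so pods can resolve host.minikube.internal. Reconstructed from the sed expression in the command above, the injected stanza is:)

hosts {
    192.168.50.1 host.minikube.internal
    fallthrough
}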
I0925 11:30:48.085411 57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0925 11:30:48.190617 57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.485051258s)
I0925 11:30:48.190677 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.190691 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.191035 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.191056 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.191068 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.191078 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.192850 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.192853 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.192876 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.192885 57426 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-694015"
I0925 11:30:48.465209 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:48.575177 57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.868034342s)
I0925 11:30:48.575232 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.575246 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.575181 57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.812311763s)
I0925 11:30:48.575317 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.575328 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.575540 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.575560 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.575570 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.575579 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.575635 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.575742 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.575772 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.575781 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.575789 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.575797 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.575878 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.575903 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.575911 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.577345 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.577384 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.577406 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.577435 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.577451 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.577940 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.577944 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.577964 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:49.298546 57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.21307781s)
I0925 11:30:49.298606 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:49.298628 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:49.302266 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:49.302272 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:49.302307 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:49.302321 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:49.302331 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:49.302655 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:49.302695 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:49.302717 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:49.304441 57426 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-694015 addons enable metrics-server
I0925 11:30:49.306061 57426 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
I0925 11:30:49.307539 57426 addons.go:502] enable addons completed in 3.258962527s: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
I0925 11:30:50.940378 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
[identical pod_ready.go:102 checks for "coredns-5644d7b6d9-qnqxm" in "kube-system" repeat every 2-3s from 11:30:53 through 11:33:43, each reporting status "Ready":"False"]
I0925 11:33:45.939462 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:47.439176 57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
I0925 11:33:47.439201 57426 pod_ready.go:81] duration metric: took 3m1.018383263s waiting for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
E0925 11:33:47.439210 57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
I0925 11:33:47.439218 57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
I0925 11:33:47.441757 57426 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
I0925 11:33:47.441785 57426 pod_ready.go:81] duration metric: took 2.55834ms waiting for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
E0925 11:33:47.441797 57426 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
I0925 11:33:47.441806 57426 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
I0925 11:33:47.447728 57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
I0925 11:33:47.447759 57426 pod_ready.go:81] duration metric: took 5.944858ms waiting for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
E0925 11:33:47.447770 57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
I0925 11:33:47.447777 57426 pod_ready.go:38] duration metric: took 3m1.031173472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 11:33:47.447809 57426 api_server.go:52] waiting for apiserver process to appear ...
I0925 11:33:47.447887 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:33:47.480326 57426 logs.go:284] 1 containers: [34825b8222f1]
I0925 11:33:47.480410 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:33:47.500790 57426 logs.go:284] 1 containers: [4b655f8475a9]
I0925 11:33:47.500883 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:33:47.521967 57426 logs.go:284] 1 containers: [c4e353aa787b]
I0925 11:33:47.522043 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:33:47.542833 57426 logs.go:284] 1 containers: [08dbfa6061b3]
I0925 11:33:47.542921 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:33:47.564220 57426 logs.go:284] 1 containers: [2bccdb65c1cc]
I0925 11:33:47.564296 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:33:47.585142 57426 logs.go:284] 1 containers: [59225a8740b7]
I0925 11:33:47.585233 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:33:47.604606 57426 logs.go:284] 0 containers: []
W0925 11:33:47.604638 57426 logs.go:286] No container was found matching "kindnet"
I0925 11:33:47.604734 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:33:47.634903 57426 logs.go:284] 1 containers: [0f9de8bda7fb]
I0925 11:33:47.634987 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:33:47.659599 57426 logs.go:284] 1 containers: [90dc66317fc1]
I0925 11:33:47.659654 57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
I0925 11:33:47.659677 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
I0925 11:33:47.713402 57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
I0925 11:33:47.713441 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
I0925 11:33:47.746308 57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
I0925 11:33:47.746347 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
I0925 11:33:47.777953 57426 logs.go:123] Gathering logs for describe nodes ...
I0925 11:33:47.777991 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:33:47.933013 57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
I0925 11:33:47.933041 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
I0925 11:33:47.959588 57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
I0925 11:33:47.959623 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
I0925 11:33:47.989240 57426 logs.go:123] Gathering logs for container status ...
I0925 11:33:47.989285 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
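(Editor's illustration: the shell command above tries crictl first and falls back to docker if crictl is absent or fails. A hedged Go sketch of the same fallback with os/exec; containerStatus is an illustrative name, not minikube's implementation.)

// Sketch (assumed) of the crictl-or-docker fallback used for the container status gather.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	// Prefer crictl when it is on PATH, mirroring `which crictl || echo crictl`.
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	// Otherwise fall back to docker, mirroring `|| sudo docker ps -a`.
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both runtimes failed:", err)
		return
	}
	fmt.Print(string(out))
}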
I0925 11:33:48.069991 57426 logs.go:123] Gathering logs for kubelet ...
I0925 11:33:48.070022 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0925 11:33:48.107511 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:48.108197 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.108438 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.108657 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940 1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
W0925 11:33:48.109661 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:48.109891 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.110800 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:48.111045 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.111291 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.111524 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.112518 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:48.112765 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.112989 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.113221 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.113444 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.113656 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.113877 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.114848 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:48.115076 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.115297 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.115517 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.115743 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.115978 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.116194 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.148933 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:33:48.150648 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
I0925 11:33:48.152304 57426 logs.go:123] Gathering logs for dmesg ...
I0925 11:33:48.152321 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:33:48.170706 57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
I0925 11:33:48.170735 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
I0925 11:33:48.204533 57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
I0925 11:33:48.204574 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
I0925 11:33:48.242201 57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
I0925 11:33:48.242239 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
I0925 11:33:48.305874 57426 logs.go:123] Gathering logs for Docker ...
I0925 11:33:48.305916 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:33:48.375041 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:33:48.375074 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0925 11:33:48.375130 57426 out.go:239] X Problems detected in kubelet:
W0925 11:33:48.375142 57426 out.go:239] Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.375161 57426 out.go:239] Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.375169 57426 out.go:239] Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.375176 57426 out.go:239] Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:33:48.375185 57426 out.go:239] Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
I0925 11:33:48.375190 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:33:48.375199 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 11:33:58.376816 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:33:58.397417 57426 api_server.go:72] duration metric: took 3m12.267407933s to wait for apiserver process to appear ...
I0925 11:33:58.397443 57426 api_server.go:88] waiting for apiserver healthz status ...
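(Editor's illustration: the healthz wait above probes the apiserver's /healthz endpoint over HTTPS. A minimal sketch follows; the URL is an assumption built from the VM IP seen earlier and minikube's usual 8443 apiserver port, and a real client would trust the cluster CA rather than skipping TLS verification.)

// Sketch (assumed) of an apiserver /healthz probe.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed cert; load the cluster CA in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.17:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}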
I0925 11:33:58.397517 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:33:58.423312 57426 logs.go:284] 1 containers: [34825b8222f1]
I0925 11:33:58.423385 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:33:58.443439 57426 logs.go:284] 1 containers: [4b655f8475a9]
I0925 11:33:58.443499 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:33:58.463360 57426 logs.go:284] 1 containers: [c4e353aa787b]
I0925 11:33:58.463443 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:33:58.486151 57426 logs.go:284] 1 containers: [08dbfa6061b3]
I0925 11:33:58.486228 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:33:58.507009 57426 logs.go:284] 1 containers: [2bccdb65c1cc]
I0925 11:33:58.507095 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:33:58.525571 57426 logs.go:284] 1 containers: [59225a8740b7]
I0925 11:33:58.525647 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:33:58.542397 57426 logs.go:284] 0 containers: []
W0925 11:33:58.542424 57426 logs.go:286] No container was found matching "kindnet"
I0925 11:33:58.542481 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:33:58.562186 57426 logs.go:284] 1 containers: [0f9de8bda7fb]
I0925 11:33:58.562260 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:33:58.580984 57426 logs.go:284] 1 containers: [90dc66317fc1]
I0925 11:33:58.581014 57426 logs.go:123] Gathering logs for describe nodes ...
I0925 11:33:58.581030 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:33:58.731921 57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
I0925 11:33:58.731958 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
I0925 11:33:58.759982 57426 logs.go:123] Gathering logs for Docker ...
I0925 11:33:58.760017 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:33:58.817088 57426 logs.go:123] Gathering logs for kubelet ...
I0925 11:33:58.817120 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0925 11:33:58.851581 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:58.852006 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.852226 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.852405 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940 1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
W0925 11:33:58.853080 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:58.853245 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.853866 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:58.854027 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.854211 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.854408 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.855047 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:58.855223 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.855403 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.855601 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.855811 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.856008 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.856210 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.856868 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:58.857032 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.857219 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.857418 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.857616 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.857814 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.858011 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.889357 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:33:58.891108 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
I0925 11:33:58.893160 57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
I0925 11:33:58.893178 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
I0925 11:33:58.927223 57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
I0925 11:33:58.927264 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
I0925 11:33:58.951343 57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
I0925 11:33:58.951376 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
I0925 11:33:58.979268 57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
I0925 11:33:58.979303 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
I0925 11:33:59.010031 57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
I0925 11:33:59.010059 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
I0925 11:33:59.050333 57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
I0925 11:33:59.050367 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
I0925 11:33:59.093782 57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
I0925 11:33:59.093820 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
I0925 11:33:59.118196 57426 logs.go:123] Gathering logs for container status ...
I0925 11:33:59.118222 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0925 11:33:59.228267 57426 logs.go:123] Gathering logs for dmesg ...
I0925 11:33:59.228306 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:33:59.247426 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:33:59.247459 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0925 11:33:59.247517 57426 out.go:239] X Problems detected in kubelet:
W0925 11:33:59.247534 57426 out.go:239] Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:59.247545 57426 out.go:239] Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:59.247554 57426 out.go:239] Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:59.247563 57426 out.go:239] Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:33:59.247574 57426 out.go:239] Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
I0925 11:33:59.247584 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:33:59.247597 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 11:34:09.249955 57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
I0925 11:34:09.256612 57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
ok
I0925 11:34:09.257809 57426 api_server.go:141] control plane version: v1.16.0
I0925 11:34:09.257827 57426 api_server.go:131] duration metric: took 10.860379501s to wait for apiserver health ...
I0925 11:34:09.257833 57426 system_pods.go:43] waiting for kube-system pods to appear ...
I0925 11:34:09.257883 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:34:09.280149 57426 logs.go:284] 1 containers: [34825b8222f1]
I0925 11:34:09.280233 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:34:09.300127 57426 logs.go:284] 1 containers: [4b655f8475a9]
I0925 11:34:09.300211 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:34:09.332581 57426 logs.go:284] 1 containers: [c4e353aa787b]
I0925 11:34:09.332656 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:34:09.352994 57426 logs.go:284] 1 containers: [08dbfa6061b3]
I0925 11:34:09.353061 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:34:09.374892 57426 logs.go:284] 1 containers: [2bccdb65c1cc]
I0925 11:34:09.374960 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:34:09.395820 57426 logs.go:284] 1 containers: [59225a8740b7]
I0925 11:34:09.395884 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:34:09.414225 57426 logs.go:284] 0 containers: []
W0925 11:34:09.414245 57426 logs.go:286] No container was found matching "kindnet"
I0925 11:34:09.414284 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:34:09.434336 57426 logs.go:284] 1 containers: [0f9de8bda7fb]
I0925 11:34:09.434398 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:34:09.456185 57426 logs.go:284] 1 containers: [90dc66317fc1]
I0925 11:34:09.456218 57426 logs.go:123] Gathering logs for describe nodes ...
I0925 11:34:09.456231 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:34:09.590378 57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
I0925 11:34:09.590409 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
I0925 11:34:09.617599 57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
I0925 11:34:09.617624 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
I0925 11:34:09.643431 57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
I0925 11:34:09.643459 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
I0925 11:34:09.665103 57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
I0925 11:34:09.665129 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
I0925 11:34:09.693931 57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
I0925 11:34:09.693963 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
I0925 11:34:09.742784 57426 logs.go:123] Gathering logs for Docker ...
I0925 11:34:09.742812 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:34:09.804145 57426 logs.go:123] Gathering logs for dmesg ...
I0925 11:34:09.804177 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:34:09.818586 57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
I0925 11:34:09.818609 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
I0925 11:34:09.857846 57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
I0925 11:34:09.857875 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
I0925 11:34:09.880799 57426 logs.go:123] Gathering logs for container status ...
I0925 11:34:09.880828 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0925 11:34:09.950547 57426 logs.go:123] Gathering logs for kubelet ...
I0925 11:34:09.950572 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0925 11:34:09.983084 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:34:09.983479 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.983617 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.983758 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940 1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
W0925 11:34:09.984405 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:34:09.984547 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.985367 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:34:09.985576 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.985713 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.985898 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.986632 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:34:09.986786 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.986945 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.987132 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.987279 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.987469 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.987663 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.988255 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:34:09.988398 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.988533 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.988685 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.988822 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.988958 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.989093 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:10.020550 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:34:10.022302 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
I0925 11:34:10.024541 57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
I0925 11:34:10.024558 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
I0925 11:34:10.053454 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:34:10.053477 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0925 11:34:10.053524 57426 out.go:239] X Problems detected in kubelet:
W0925 11:34:10.053535 57426 out.go:239] Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:10.053543 57426 out.go:239] Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:10.053551 57426 out.go:239] Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:10.053557 57426 out.go:239] Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:34:10.053563 57426 out.go:239] Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
I0925 11:34:10.053568 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:34:10.053573 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 11:34:20.061232 57426 system_pods.go:59] 8 kube-system pods found
I0925 11:34:20.061260 57426 system_pods.go:61] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:20.061267 57426 system_pods.go:61] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:20.061271 57426 system_pods.go:61] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:20.061277 57426 system_pods.go:61] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:20.061284 57426 system_pods.go:61] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:20.061288 57426 system_pods.go:61] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:20.061295 57426 system_pods.go:61] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:20.061300 57426 system_pods.go:61] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:20.061307 57426 system_pods.go:74] duration metric: took 10.803468736s to wait for pod list to return data ...
I0925 11:34:20.061314 57426 default_sa.go:34] waiting for default service account to be created ...
I0925 11:34:20.064090 57426 default_sa.go:45] found service account: "default"
I0925 11:34:20.064114 57426 default_sa.go:55] duration metric: took 2.793638ms for default service account to be created ...
I0925 11:34:20.064123 57426 system_pods.go:116] waiting for k8s-apps to be running ...
I0925 11:34:20.068614 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:20.068644 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:20.068653 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:20.068674 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:20.068682 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:20.068690 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:20.068696 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:20.068707 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:20.068719 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:20.068739 57426 retry.go:31] will retry after 201.15744ms: missing components: kube-dns, kube-proxy
I0925 11:34:20.275900 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:20.275943 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:20.275952 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:20.275960 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:20.275967 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:20.275974 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:20.275982 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:20.275992 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:20.276001 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:20.276021 57426 retry.go:31] will retry after 295.538203ms: missing components: kube-dns, kube-proxy
I0925 11:34:20.579425 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:20.579469 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:20.579480 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:20.579489 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:20.579497 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:20.579506 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:20.579513 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:20.579522 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:20.579531 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:20.579553 57426 retry.go:31] will retry after 438.061345ms: missing components: kube-dns, kube-proxy
I0925 11:34:21.024313 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:21.024351 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:21.024360 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:21.024365 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:21.024372 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:21.024381 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:21.024390 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:21.024401 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:21.024411 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:21.024428 57426 retry.go:31] will retry after 504.61622ms: missing components: kube-dns, kube-proxy
I0925 11:34:21.536419 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:21.536449 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:21.536460 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:21.536466 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:21.536470 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:21.536476 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:21.536480 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:21.536486 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:21.536492 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:21.536506 57426 retry.go:31] will retry after 484.39135ms: missing components: kube-dns, kube-proxy
I0925 11:34:22.027728 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:22.027766 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:22.027776 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:22.027783 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:22.027787 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:22.027796 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:22.027804 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:22.027814 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:22.027822 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:22.027838 57426 retry.go:31] will retry after 680.21989ms: missing components: kube-dns, kube-proxy
I0925 11:34:22.714282 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:22.714315 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:22.714326 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:22.714335 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:22.714342 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:22.714349 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:22.714354 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:22.714365 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:22.714381 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:22.714399 57426 retry.go:31] will retry after 719.383007ms: missing components: kube-dns, kube-proxy
I0925 11:34:23.438829 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:23.438855 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:23.438862 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:23.438867 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:23.438872 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:23.438877 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:23.438882 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:23.438891 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:23.438898 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:23.438912 57426 retry.go:31] will retry after 1.277927153s: missing components: kube-dns, kube-proxy
I0925 11:34:24.724821 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:24.724855 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:24.724864 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:24.724871 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:24.724878 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:24.724887 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:24.724894 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:24.724904 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:24.724919 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:24.724942 57426 retry.go:31] will retry after 1.757108265s: missing components: kube-dns, kube-proxy
I0925 11:34:26.488127 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:26.488156 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:26.488163 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:26.488182 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:26.488203 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:26.488213 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:26.488222 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:26.488232 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:26.488247 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:26.488266 57426 retry.go:31] will retry after 1.427718537s: missing components: kube-dns, kube-proxy
I0925 11:34:27.921755 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:27.921783 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:27.921790 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:27.921795 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:27.921800 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:27.921805 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:27.921810 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:27.921815 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:27.921821 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:27.921835 57426 retry.go:31] will retry after 1.957734881s: missing components: kube-dns, kube-proxy
I0925 11:34:29.885748 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:29.885776 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:29.885783 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:29.885789 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:29.885794 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:29.885799 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:29.885803 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:29.885810 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:29.885815 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:29.885830 57426 retry.go:31] will retry after 3.054467533s: missing components: kube-dns, kube-proxy
I0925 11:34:32.946353 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:32.946383 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:32.946391 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:32.946396 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:32.946401 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:32.946406 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:32.946410 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:32.946416 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:32.946421 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:32.946434 57426 retry.go:31] will retry after 3.761041339s: missing components: kube-dns, kube-proxy
I0925 11:34:36.713729 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:36.713754 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:36.713761 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:36.713767 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:36.713772 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:36.713777 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:36.713781 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:36.713788 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:36.713793 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:36.713807 57426 retry.go:31] will retry after 4.734467176s: missing components: kube-dns, kube-proxy
I0925 11:34:41.454464 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:41.454492 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:41.454498 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:41.454503 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:41.454508 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:41.454513 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:41.454518 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:41.454524 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:41.454529 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:41.454542 57426 retry.go:31] will retry after 4.698913888s: missing components: kube-dns, kube-proxy
I0925 11:34:46.159214 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:46.159255 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:46.159266 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:46.159275 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:46.159282 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:46.159292 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:46.159299 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:46.159314 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:46.159328 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:46.159350 57426 retry.go:31] will retry after 5.507304477s: missing components: kube-dns, kube-proxy
I0925 11:34:51.672849 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:51.672877 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:51.672884 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:51.672889 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:51.672894 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:51.672899 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:51.672905 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:51.672914 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:51.672919 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:51.672933 57426 retry.go:31] will retry after 8.254229342s: missing components: kube-dns, kube-proxy
I0925 11:34:59.936057 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:59.936086 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:59.936094 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:59.936099 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:59.936104 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:59.936109 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:59.936114 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:59.936119 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:59.936125 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:59.936139 57426 retry.go:31] will retry after 9.535060954s: missing components: kube-dns, kube-proxy
I0925 11:35:09.479385 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:35:09.479413 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:35:09.479420 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:35:09.479428 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:35:09.479433 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:35:09.479441 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:35:09.479446 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:35:09.479452 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:35:09.479459 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:35:09.479471 57426 retry.go:31] will retry after 13.479799453s: missing components: kube-dns, kube-proxy
I0925 11:35:22.964926 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:35:22.964955 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:35:22.964962 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:35:22.964967 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:35:22.964972 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:35:22.964977 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:35:22.964982 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:35:22.964988 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:35:22.964993 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:35:22.965006 57426 retry.go:31] will retry after 14.199608167s: missing components: kube-dns, kube-proxy
I0925 11:35:37.171988 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:35:37.172022 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:35:37.172034 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:35:37.172041 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:35:37.172048 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:35:37.172055 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:35:37.172061 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:35:37.172072 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:35:37.172083 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:35:37.172101 57426 retry.go:31] will retry after 17.274040235s: missing components: kube-dns, kube-proxy
I0925 11:35:54.452675 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:35:54.452702 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:35:54.452709 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:35:54.452714 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:35:54.452719 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:35:54.452727 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:35:54.452731 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:35:54.452738 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:35:54.452743 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:35:54.452756 57426 retry.go:31] will retry after 28.29436119s: missing components: kube-dns, kube-proxy
I0925 11:36:22.755662 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:36:22.755700 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:36:22.755710 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:36:22.755718 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:36:22.755724 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:36:22.755732 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:36:22.755746 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:36:22.755761 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:36:22.755771 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:36:22.755791 57426 retry.go:31] will retry after 35.525659438s: missing components: kube-dns, kube-proxy
I0925 11:36:58.289849 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:36:58.289887 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:36:58.289896 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:36:58.289901 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:36:58.289910 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:36:58.289919 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:36:58.289927 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:36:58.289939 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:36:58.289950 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:36:58.289971 57426 retry.go:31] will retry after 44.058995008s: missing components: kube-dns, kube-proxy
I0925 11:37:42.356673 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:37:42.356698 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:37:42.356705 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:37:42.356710 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:37:42.356715 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:37:42.356721 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:37:42.356725 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:37:42.356731 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:37:42.356736 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:37:42.356752 57426 retry.go:31] will retry after 47.757072258s: missing components: kube-dns, kube-proxy
I0925 11:38:30.124408 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:38:30.124436 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:38:30.124443 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:38:30.124449 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:38:30.124454 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:38:30.124459 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:38:30.124464 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:38:30.124470 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:38:30.124475 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:38:30.124490 57426 retry.go:31] will retry after 48.54868015s: missing components: kube-dns, kube-proxy
I0925 11:39:18.680525 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:39:18.680555 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:39:18.680561 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:39:18.680567 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:39:18.680572 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:39:18.680578 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:39:18.680582 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:39:18.680589 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:39:18.680594 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:39:18.680607 57426 retry.go:31] will retry after 53.095866632s: missing components: kube-dns, kube-proxy
I0925 11:40:11.783486 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:40:11.783513 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:40:11.783520 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:40:11.783527 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:40:11.783532 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:40:11.783537 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:40:11.783542 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:40:11.783548 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:40:11.783553 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:40:11.786119 57426 out.go:177]
W0925 11:40:11.787697 57426 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
W0925 11:40:11.787711 57426 out.go:239] *
W0925 11:40:11.788461 57426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0925 11:40:11.790057 57426 out.go:177]
** /stderr **
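Note: the "will retry after ..." intervals in the stderr above climb roughly exponentially with jitter, from ~1.4s up to ~53s, until the overall "wait 6m0s for node" deadline for the apps_running check expires. The Go sketch below illustrates that polling pattern; it is a minimal illustration only, not minikube's actual retry helper, and the function name retryWithBackoff, the doubling factor, the one-minute cap, and the jitter bound are all assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff polls check until it succeeds or the deadline passes,
// roughly doubling the base wait between attempts and adding random jitter,
// which is the shape of the "will retry after ..." lines in the log above.
func retryWithBackoff(deadline time.Duration, check func() error) error {
	start := time.Now()
	wait := time.Second
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		// Jitter by up to 50% of the base so concurrent pollers don't sync up;
		// jitter is also why a later delay can be shorter than an earlier one
		// (e.g. 4.734s followed by 4.698s in the log above).
		jittered := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		if wait < time.Minute { // cap the base delay
			wait *= 2
		}
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns, kube-proxy")
		}
		return nil
	})
	fmt.Println("result:", err)
}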
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-694015 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --kubernetes-version=v1.16.0": exit status 80
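The exit status 80 above reduces to the system_pods check never observing a Running kube-proxy or kube-dns pod within the deadline; the log says kube-dns while listing a coredns pod because kubeadm deploys CoreDNS under the k8s-app=kube-dns label, which is the component name minikube reports. The client-go sketch below shows this kind of component check; the name-prefix matching and the expected-component list are illustrative assumptions, not minikube's real selector logic (which lives in its bootstrapper packages).

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// missingComponents returns the expected components that have no Running pod
// in kube-system, mirroring the "missing components: ..." retry messages.
func missingComponents(ctx context.Context, cs *kubernetes.Clientset, expected []string) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	running := map[string]bool{}
	for _, p := range pods.Items {
		if p.Status.Phase != "Running" {
			continue // Pending pods (like kube-proxy-gsdzk above) don't count.
		}
		// Assumed heuristic: pod names are prefixed with their component,
		// e.g. "kube-proxy-gsdzk" or "coredns-5644d7b6d9-qnqxm".
		for _, c := range expected {
			if strings.HasPrefix(p.Name, c) {
				running[c] = true
			}
		}
	}
	var missing []string
	for _, c := range expected {
		if !running[c] {
			missing = append(missing, c)
		}
	}
	return missing, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	missing, err := missingComponents(context.Background(),
		cs, []string{"coredns", "kube-proxy", "etcd", "kube-apiserver"})
	if err != nil {
		panic(err)
	}
	fmt.Println("missing components:", missing)
}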
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694015 -n old-k8s-version-694015
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p old-k8s-version-694015 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
*
* ==> Audit <==
* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| pause | -p newest-cni-372603 | newest-cni-372603 | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-372603 | newest-cni-372603 | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-372603 | newest-cni-372603 | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
| delete | -p newest-cni-372603 | newest-cni-372603 | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
| delete | -p | disable-driver-mounts-785493 | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:26 UTC |
| | disable-driver-mounts-785493 | | | | | |
| start | -p embed-certs-094323 | embed-certs-094323 | jenkins | v1.31.2 | 25 Sep 23 11:26 UTC | 25 Sep 23 11:27 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=kvm2 | | | | | |
| | --kubernetes-version=v1.28.2 | | | | | |
| addons | enable metrics-server -p embed-certs-094323 | embed-certs-094323 | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p embed-certs-094323 | embed-certs-094323 | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p embed-certs-094323 | embed-certs-094323 | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:28 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p embed-certs-094323 | embed-certs-094323 | jenkins | v1.31.2 | 25 Sep 23 11:28 UTC | 25 Sep 23 11:33 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=kvm2 | | | | | |
| | --kubernetes-version=v1.28.2 | | | | | |
| ssh | -p no-preload-863905 sudo | no-preload-863905 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
| | crictl images -o json | | | | | |
| pause | -p no-preload-863905 | no-preload-863905 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-863905 | no-preload-863905 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-863905 | no-preload-863905 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
| delete | -p no-preload-863905 | no-preload-863905 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
| ssh | -p | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
| | default-k8s-diff-port-319133 | | | | | |
| | sudo crictl images -o json | | | | | |
| pause | -p | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
| | default-k8s-diff-port-319133 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
| | default-k8s-diff-port-319133 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
| | default-k8s-diff-port-319133 | | | | | |
| delete | -p | default-k8s-diff-port-319133 | jenkins | v1.31.2 | 25 Sep 23 11:30 UTC | 25 Sep 23 11:30 UTC |
| | default-k8s-diff-port-319133 | | | | | |
| ssh | -p embed-certs-094323 sudo | embed-certs-094323 | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
| | crictl images -o json | | | | | |
| pause | -p embed-certs-094323 | embed-certs-094323 | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p embed-certs-094323 | embed-certs-094323 | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p embed-certs-094323 | embed-certs-094323 | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
| delete | -p embed-certs-094323 | embed-certs-094323 | jenkins | v1.31.2 | 25 Sep 23 11:34 UTC | 25 Sep 23 11:34 UTC |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/09/25 11:28:19
Running on machine: ubuntu-20-agent-3
Binary: Built with gc go1.21.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0925 11:28:19.035134 59899 out.go:296] Setting OutFile to fd 1 ...
I0925 11:28:19.035380 59899 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 11:28:19.035388 59899 out.go:309] Setting ErrFile to fd 2...
I0925 11:28:19.035392 59899 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 11:28:19.035594 59899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-6032/.minikube/bin
I0925 11:28:19.036084 59899 out.go:303] Setting JSON to false
I0925 11:28:19.037024 59899 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4250,"bootTime":1695637049,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0925 11:28:19.037076 59899 start.go:138] virtualization: kvm guest
I0925 11:28:19.039385 59899 out.go:177] * [embed-certs-094323] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I0925 11:28:19.041106 59899 out.go:177] - MINIKUBE_LOCATION=17297
I0925 11:28:19.041220 59899 notify.go:220] Checking for updates...
I0925 11:28:19.042531 59899 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0925 11:28:19.043924 59899 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17297-6032/kubeconfig
I0925 11:28:19.045264 59899 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-6032/.minikube
I0925 11:28:19.046665 59899 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0925 11:28:19.047943 59899 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0925 11:28:19.049713 59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 11:28:19.050284 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:28:19.050336 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:28:19.066768 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42209
I0925 11:28:19.067166 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:28:19.067840 59899 main.go:141] libmachine: Using API Version 1
I0925 11:28:19.067866 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:28:19.068328 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:28:19.068548 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:28:19.069227 59899 driver.go:373] Setting default libvirt URI to qemu:///system
I0925 11:28:19.070747 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:28:19.070796 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:28:19.084889 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38725
I0925 11:28:19.085259 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:28:19.085647 59899 main.go:141] libmachine: Using API Version 1
I0925 11:28:19.085666 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:28:19.085966 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:28:19.086156 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:28:19.120695 59899 out.go:177] * Using the kvm2 driver based on existing profile
I0925 11:28:19.122195 59899 start.go:298] selected driver: kvm2
I0925 11:28:19.122213 59899 start.go:902] validating driver "kvm2" against &{Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0925 11:28:19.122331 59899 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0925 11:28:19.122990 59899 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0925 11:28:19.123070 59899 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17297-6032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0925 11:28:19.137559 59899 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
I0925 11:28:19.137967 59899 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0925 11:28:19.138031 59899 cni.go:84] Creating CNI manager for ""
I0925 11:28:19.138049 59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0925 11:28:19.138061 59899 start_flags.go:321] config:
{Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0925 11:28:19.138243 59899 iso.go:125] acquiring lock: {Name:mkb9e2f6e1d5a2b50ee182236ae1b19ef3677829 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0925 11:28:19.139914 59899 out.go:177] * Starting control plane node embed-certs-094323 in cluster embed-certs-094323
I0925 11:28:19.141213 59899 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
I0925 11:28:19.141251 59899 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
I0925 11:28:19.141267 59899 cache.go:57] Caching tarball of preloaded images
I0925 11:28:19.141342 59899 preload.go:174] Found /home/jenkins/minikube-integration/17297-6032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0925 11:28:19.141351 59899 cache.go:60] Finished verifying existence of preloaded tar for v1.28.2 on docker
I0925 11:28:19.141434 59899 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/config.json ...
I0925 11:28:19.141593 59899 start.go:365] acquiring machines lock for embed-certs-094323: {Name:mk02fb3d97d6ed60b07ca18d96424c593d1bb8d0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0925 11:28:19.141630 59899 start.go:369] acquired machines lock for "embed-certs-094323" in 22.488µs
I0925 11:28:19.141643 59899 start.go:96] Skipping create...Using existing machine configuration
I0925 11:28:19.141651 59899 fix.go:54] fixHost starting:
I0925 11:28:19.141918 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:28:19.141948 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:28:19.155211 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
I0925 11:28:19.155620 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:28:19.156032 59899 main.go:141] libmachine: Using API Version 1
I0925 11:28:19.156055 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:28:19.156384 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:28:19.156590 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:28:19.156767 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
I0925 11:28:19.158188 59899 fix.go:102] recreateIfNeeded on embed-certs-094323: state=Stopped err=<nil>
I0925 11:28:19.158223 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
W0925 11:28:19.158395 59899 fix.go:128] unexpected machine state, will restart: <nil>
I0925 11:28:19.160159 59899 out.go:177] * Restarting existing kvm2 VM for "embed-certs-094323" ...
I0925 11:28:15.403806 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:17.404448 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:19.405067 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:15.674829 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:18.175095 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:20.492932 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:22.991315 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:19.161340 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Start
I0925 11:28:19.161501 59899 main.go:141] libmachine: (embed-certs-094323) Ensuring networks are active...
I0925 11:28:19.162257 59899 main.go:141] libmachine: (embed-certs-094323) Ensuring network default is active
I0925 11:28:19.162588 59899 main.go:141] libmachine: (embed-certs-094323) Ensuring network mk-embed-certs-094323 is active
I0925 11:28:19.163048 59899 main.go:141] libmachine: (embed-certs-094323) Getting domain xml...
I0925 11:28:19.163763 59899 main.go:141] libmachine: (embed-certs-094323) Creating domain...
I0925 11:28:20.442361 59899 main.go:141] libmachine: (embed-certs-094323) Waiting to get IP...
I0925 11:28:20.443271 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:20.443734 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:20.443823 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.443734 59935 retry.go:31] will retry after 267.692283ms: waiting for machine to come up
I0925 11:28:20.713388 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:20.713952 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:20.713983 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.713901 59935 retry.go:31] will retry after 277.980932ms: waiting for machine to come up
I0925 11:28:20.993556 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:20.994198 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:20.994234 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:20.994172 59935 retry.go:31] will retry after 459.010271ms: waiting for machine to come up
I0925 11:28:21.454879 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:21.455430 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:21.455461 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:21.455383 59935 retry.go:31] will retry after 366.809435ms: waiting for machine to come up
I0925 11:28:21.824207 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:21.824773 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:21.824806 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:21.824720 59935 retry.go:31] will retry after 488.071541ms: waiting for machine to come up
I0925 11:28:22.314305 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:22.314790 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:22.314818 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:22.314762 59935 retry.go:31] will retry after 945.003407ms: waiting for machine to come up
I0925 11:28:23.261899 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:23.262367 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:23.262409 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:23.262317 59935 retry.go:31] will retry after 1.092936458s: waiting for machine to come up
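The retry.go lines above show the wait-for-IP loop: poll libvirt for the domain's DHCP lease, sleeping a growing, jittered interval between attempts. A minimal Go sketch of that pattern, with a hypothetical lookup function standing in for the libvirt lease query:

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// waitForIP polls lookup with jittered, growing sleeps until it returns an
// IP or the deadline passes, mirroring the retry.go lines in this log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    backoff := 250 * time.Millisecond
    for time.Now().Before(deadline) {
        if ip, err := lookup(); err == nil {
            return ip, nil
        }
        // Sleep for the base delay plus up to 100% jitter, then grow the base.
        sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
        fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
        time.Sleep(sleep)
        if backoff < 4*time.Second {
            backoff *= 2
        }
    }
    return "", errors.New("timed out waiting for an IP address")
}

func main() {
    calls := 0
    ip, err := waitForIP(func() (string, error) {
        calls++
        if calls < 4 { // simulate a VM that takes a few polls to get a lease
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.39.111", nil
    }, 30*time.Second)
    fmt.Println(ip, err)
}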
I0925 11:28:21.407022 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:23.905338 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:20.674171 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:22.674573 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:25.174611 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:24.991430 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:27.491751 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
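The interleaved pod_ready lines come from three test profiles running in parallel (PIDs 57426, 57752, 57927), each polling its metrics-server pod's Ready condition. A minimal client-go sketch of that check; the kubeconfig path and pod name below are illustrative, not minikube's actual pod_ready.go helpers:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    for {
        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
            "metrics-server-57f55c9bc5-p2tvr", metav1.GetOptions{})
        if err == nil && isPodReady(pod) {
            fmt.Println("pod is Ready")
            return
        }
        fmt.Println(`pod has status "Ready":"False"`)
        time.Sleep(2 * time.Second)
    }
}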
I0925 11:28:24.357394 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:24.358014 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:24.358072 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:24.357975 59935 retry.go:31] will retry after 1.364274695s: waiting for machine to come up
I0925 11:28:25.723341 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:25.723819 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:25.723848 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:25.723762 59935 retry.go:31] will retry after 1.588423993s: waiting for machine to come up
I0925 11:28:27.313769 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:27.314265 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:27.314299 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:27.314211 59935 retry.go:31] will retry after 1.537433598s: waiting for machine to come up
I0925 11:28:28.853890 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:28.854449 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:28.854472 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:28.854378 59935 retry.go:31] will retry after 2.010519573s: waiting for machine to come up
I0925 11:28:26.405198 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:28.409892 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:27.673983 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:29.675459 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:29.492466 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:31.493901 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:30.867498 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:30.868057 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:30.868084 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:30.868021 59935 retry.go:31] will retry after 2.230830763s: waiting for machine to come up
I0925 11:28:33.100983 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:33.101572 59899 main.go:141] libmachine: (embed-certs-094323) DBG | unable to find current IP address of domain embed-certs-094323 in network mk-embed-certs-094323
I0925 11:28:33.101612 59899 main.go:141] libmachine: (embed-certs-094323) DBG | I0925 11:28:33.101515 59935 retry.go:31] will retry after 4.360204715s: waiting for machine to come up
I0925 11:28:30.903969 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:32.905907 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:32.173159 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:34.672934 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:33.990422 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:35.990706 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:37.992428 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:37.463184 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.463720 59899 main.go:141] libmachine: (embed-certs-094323) Found IP for machine: 192.168.39.111
I0925 11:28:37.463748 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has current primary IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.463757 59899 main.go:141] libmachine: (embed-certs-094323) Reserving static IP address...
I0925 11:28:37.464174 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "embed-certs-094323", mac: "52:54:00:07:77:47", ip: "192.168.39.111"} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:37.464215 59899 main.go:141] libmachine: (embed-certs-094323) DBG | skip adding static IP to network mk-embed-certs-094323 - found existing host DHCP lease matching {name: "embed-certs-094323", mac: "52:54:00:07:77:47", ip: "192.168.39.111"}
I0925 11:28:37.464230 59899 main.go:141] libmachine: (embed-certs-094323) Reserved static IP address: 192.168.39.111
I0925 11:28:37.464248 59899 main.go:141] libmachine: (embed-certs-094323) Waiting for SSH to be available...
I0925 11:28:37.464264 59899 main.go:141] libmachine: (embed-certs-094323) DBG | Getting to WaitForSSH function...
I0925 11:28:37.466402 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.466816 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:37.466843 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.467015 59899 main.go:141] libmachine: (embed-certs-094323) DBG | Using SSH client type: external
I0925 11:28:37.467053 59899 main.go:141] libmachine: (embed-certs-094323) DBG | Using SSH private key: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa (-rw-------)
I0925 11:28:37.467087 59899 main.go:141] libmachine: (embed-certs-094323) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa -p 22] /usr/bin/ssh <nil>}
I0925 11:28:37.467100 59899 main.go:141] libmachine: (embed-certs-094323) DBG | About to run SSH command:
I0925 11:28:37.467136 59899 main.go:141] libmachine: (embed-certs-094323) DBG | exit 0
I0925 11:28:37.556399 59899 main.go:141] libmachine: (embed-certs-094323) DBG | SSH cmd err, output: <nil>:
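WaitForSSH above shells out to the system ssh binary with a fixed, non-interactive option set and runs `exit 0` until the command succeeds, which proves the guest's SSH server is up and accepting the key. A sketch of driving that external client from Go; the host, user, and port match the log, the key path is abbreviated:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// sshExit0 runs `exit 0` over ssh with the non-interactive options seen in
// the log; a nil error means the server accepted our key and ran the command.
func sshExit0(user, host, keyPath string, port int) error {
    args := []string{
        "-F", "/dev/null",
        "-o", "ConnectionAttempts=3",
        "-o", "ConnectTimeout=10",
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "IdentitiesOnly=yes",
        "-i", keyPath,
        "-p", fmt.Sprint(port),
        fmt.Sprintf("%s@%s", user, host),
        "exit 0",
    }
    return exec.Command("ssh", args...).Run()
}

func main() {
    for {
        err := sshExit0("docker", "192.168.39.111",
            "/home/jenkins/.minikube/machines/embed-certs-094323/id_rsa", 22) // illustrative key path
        if err == nil {
            fmt.Println("SSH is available")
            return
        }
        time.Sleep(time.Second)
    }
}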
I0925 11:28:37.556778 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetConfigRaw
I0925 11:28:37.557414 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
I0925 11:28:37.560030 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.560395 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:37.560428 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.560640 59899 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/config.json ...
I0925 11:28:37.560845 59899 machine.go:88] provisioning docker machine ...
I0925 11:28:37.560864 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:28:37.561073 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
I0925 11:28:37.561221 59899 buildroot.go:166] provisioning hostname "embed-certs-094323"
I0925 11:28:37.561235 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
I0925 11:28:37.561420 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:28:37.563597 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.563895 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:37.563925 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.564030 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:28:37.564225 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:37.564405 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:37.564531 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:28:37.564705 59899 main.go:141] libmachine: Using SSH client type: native
I0925 11:28:37.565158 59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0925 11:28:37.565180 59899 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-094323 && echo "embed-certs-094323" | sudo tee /etc/hostname
I0925 11:28:37.695364 59899 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-094323
I0925 11:28:37.695398 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:28:37.698664 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.699091 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:37.699124 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.699344 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:28:37.699550 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:37.699717 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:37.699901 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:28:37.700108 59899 main.go:141] libmachine: Using SSH client type: native
I0925 11:28:37.700483 59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0925 11:28:37.700503 59899 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-094323' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-094323/g' /etc/hosts;
  else
    echo '127.0.1.1 embed-certs-094323' | sudo tee -a /etc/hosts;
  fi
fi
I0925 11:28:37.824658 59899 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0925 11:28:37.824711 59899 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17297-6032/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-6032/.minikube}
I0925 11:28:37.824734 59899 buildroot.go:174] setting up certificates
I0925 11:28:37.824745 59899 provision.go:83] configureAuth start
I0925 11:28:37.824759 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetMachineName
I0925 11:28:37.825074 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
I0925 11:28:37.827695 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.828087 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:37.828131 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.828262 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:28:37.830526 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.830866 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:37.830897 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.830986 59899 provision.go:138] copyHostCerts
I0925 11:28:37.831038 59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem, removing ...
I0925 11:28:37.831050 59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem
I0925 11:28:37.831116 59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/ca.pem (1078 bytes)
I0925 11:28:37.831199 59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem, removing ...
I0925 11:28:37.831208 59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem
I0925 11:28:37.831231 59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/cert.pem (1123 bytes)
I0925 11:28:37.831315 59899 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem, removing ...
I0925 11:28:37.831322 59899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem
I0925 11:28:37.831343 59899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-6032/.minikube/key.pem (1679 bytes)
I0925 11:28:37.831388 59899 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem org=jenkins.embed-certs-094323 san=[192.168.39.111 192.168.39.111 localhost 127.0.0.1 minikube embed-certs-094323]
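The server cert above is issued with both IP and DNS SANs so one certificate is valid for 192.168.39.111, localhost, and the node name. A compact sketch of that SAN set with crypto/x509; minikube signs with its own CA, while this example self-signs to stay short:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-094323"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        // The SAN list reported by the provision.go line above.
        IPAddresses: []net.IP{net.ParseIP("192.168.39.111"), net.ParseIP("127.0.0.1")},
        DNSNames:    []string{"localhost", "minikube", "embed-certs-094323"},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}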
I0925 11:28:37.908612 59899 provision.go:172] copyRemoteCerts
I0925 11:28:37.908700 59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0925 11:28:37.908735 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:28:37.911729 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.912109 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:37.912140 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:37.912334 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:28:37.912534 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:37.912716 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:28:37.912845 59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
I0925 11:28:37.998547 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0925 11:28:38.026509 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
I0925 11:28:38.050201 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0925 11:28:38.074649 59899 provision.go:86] duration metric: configureAuth took 249.890915ms
I0925 11:28:38.074676 59899 buildroot.go:189] setting minikube options for container-runtime
I0925 11:28:38.074944 59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 11:28:38.074975 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:28:38.075242 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:28:38.078170 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:38.078528 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:38.078567 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:38.078795 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:28:38.078989 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:38.079174 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:38.079356 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:28:38.079539 59899 main.go:141] libmachine: Using SSH client type: native
I0925 11:28:38.079964 59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0925 11:28:38.079984 59899 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0925 11:28:38.198741 59899 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0925 11:28:38.198765 59899 buildroot.go:70] root file system type: tmpfs
I0925 11:28:38.198890 59899 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0925 11:28:38.198915 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:28:38.201807 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:38.202182 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:38.202213 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:38.202351 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:28:38.202547 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:38.202711 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:38.202847 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:28:38.202992 59899 main.go:141] libmachine: Using SSH client type: native
I0925 11:28:38.203346 59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0925 11:28:38.203422 59899 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this option.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0925 11:28:38.330031 59899 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this option.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0925 11:28:38.330061 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:28:38.333195 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:38.333537 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:38.333568 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:38.333754 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:28:38.333924 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:38.334109 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:38.334259 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:28:38.334428 59899 main.go:141] libmachine: Using SSH client type: native
I0925 11:28:38.334869 59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0925 11:28:38.334898 59899 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
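The command above is an idempotent update: diff the freshly rendered unit against the installed one, and only move it into place, daemon-reload, enable, and restart docker when they differ. The core of that check in Go, reduced to a sketch; the real flow also runs the systemctl steps:

package main

import (
    "bytes"
    "fmt"
    "os"
)

// updateIfChanged writes data to path only when the current content differs,
// returning true when a service restart is warranted.
func updateIfChanged(path string, data []byte) (bool, error) {
    old, err := os.ReadFile(path)
    if err == nil && bytes.Equal(old, data) {
        return false, nil // unchanged: skip the restart
    }
    if err != nil && !os.IsNotExist(err) {
        return false, err
    }
    if err := os.WriteFile(path, data, 0o644); err != nil {
        return false, err
    }
    return true, nil
}

func main() {
    changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n...\n"))
    fmt.Println(changed, err)
}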
I0925 11:28:35.403941 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:37.405325 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:36.673537 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:38.675023 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:39.250696 59899 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0925 11:28:39.250732 59899 machine.go:91] provisioned docker machine in 1.689868908s
I0925 11:28:39.250752 59899 start.go:300] post-start starting for "embed-certs-094323" (driver="kvm2")
I0925 11:28:39.250766 59899 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0925 11:28:39.250786 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:28:39.251224 59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0925 11:28:39.251260 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:28:39.254399 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:39.254904 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:39.254937 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:39.255093 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:28:39.255261 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:39.255432 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:28:39.255612 59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
I0925 11:28:39.350663 59899 ssh_runner.go:195] Run: cat /etc/os-release
I0925 11:28:39.357361 59899 info.go:137] Remote host: Buildroot 2021.02.12
I0925 11:28:39.357388 59899 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/addons for local assets ...
I0925 11:28:39.357464 59899 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-6032/.minikube/files for local assets ...
I0925 11:28:39.357582 59899 filesync.go:149] local asset: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem -> 132132.pem in /etc/ssl/certs
I0925 11:28:39.357712 59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0925 11:28:39.374752 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /etc/ssl/certs/132132.pem (1708 bytes)
I0925 11:28:39.407365 59899 start.go:303] post-start completed in 156.599445ms
I0925 11:28:39.407390 59899 fix.go:56] fixHost completed within 20.265737349s
I0925 11:28:39.407412 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:28:39.409869 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:39.410204 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:39.410246 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:39.410351 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:28:39.410526 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:39.410672 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:39.410817 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:28:39.411004 59899 main.go:141] libmachine: Using SSH client type: native
I0925 11:28:39.411443 59899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil> [] 0s} 192.168.39.111 22 <nil> <nil>}
I0925 11:28:39.411457 59899 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0925 11:28:39.525878 59899 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695641319.473578694
I0925 11:28:39.525906 59899 fix.go:206] guest clock: 1695641319.473578694
I0925 11:28:39.525916 59899 fix.go:219] Guest: 2023-09-25 11:28:39.473578694 +0000 UTC Remote: 2023-09-25 11:28:39.407394176 +0000 UTC m=+20.400726255 (delta=66.184518ms)
I0925 11:28:39.525941 59899 fix.go:190] guest clock delta is within tolerance: 66.184518ms
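fix.go above compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the absolute delta is inside a tolerance. A sketch of the parsing and comparison; the one-second tolerance here is an assumption for the example:

package main

import (
    "fmt"
    "strconv"
    "strings"
    "time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
    parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    sec, err := strconv.ParseInt(parts[0], 10, 64)
    if err != nil {
        return time.Time{}, err
    }
    var nsec int64
    if len(parts) == 2 {
        if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
            return time.Time{}, err
        }
    }
    return time.Unix(sec, nsec), nil
}

func main() {
    guest, err := parseGuestClock("1695641319.473578694") // value from the log
    if err != nil {
        panic(err)
    }
    delta := time.Since(guest)
    if delta < 0 {
        delta = -delta
    }
    fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta < time.Second)
}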
I0925 11:28:39.525949 59899 start.go:83] releasing machines lock for "embed-certs-094323", held for 20.384309776s
I0925 11:28:39.525980 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:28:39.526255 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
I0925 11:28:39.528977 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:39.529347 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:39.529375 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:39.529553 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:28:39.530157 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:28:39.530328 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:28:39.530430 59899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0925 11:28:39.530480 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:28:39.530741 59899 ssh_runner.go:195] Run: cat /version.json
I0925 11:28:39.530766 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:28:39.533347 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:39.533598 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:39.533796 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:39.533834 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:39.534008 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:39.534017 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:28:39.534033 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:39.534116 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:28:39.534328 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:39.534397 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:28:39.534497 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:28:39.534546 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:28:39.534701 59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
I0925 11:28:39.534716 59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
I0925 11:28:39.619280 59899 ssh_runner.go:195] Run: systemctl --version
I0925 11:28:39.651081 59899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0925 11:28:39.656908 59899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0925 11:28:39.656977 59899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0925 11:28:39.674233 59899 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0925 11:28:39.674259 59899 start.go:469] detecting cgroup driver to use...
I0925 11:28:39.674415 59899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0925 11:28:39.693891 59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0925 11:28:39.704196 59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0925 11:28:39.714537 59899 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0925 11:28:39.714587 59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0925 11:28:39.724833 59899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0925 11:28:39.734476 59899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0925 11:28:39.744763 59899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0925 11:28:39.755865 59899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0925 11:28:39.765565 59899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0925 11:28:39.775652 59899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0925 11:28:39.785628 59899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0925 11:28:39.794828 59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0925 11:28:39.915710 59899 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0925 11:28:39.933084 59899 start.go:469] detecting cgroup driver to use...
I0925 11:28:39.933164 59899 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0925 11:28:39.949304 59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0925 11:28:39.963709 59899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0925 11:28:39.980784 59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0925 11:28:39.994887 59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0925 11:28:40.007408 59899 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0925 11:28:40.034805 59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0925 11:28:40.047786 59899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0925 11:28:40.066171 59899 ssh_runner.go:195] Run: which cri-dockerd
I0925 11:28:40.070494 59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0925 11:28:40.078000 59899 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0925 11:28:40.093462 59899 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0925 11:28:40.197902 59899 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0925 11:28:40.313798 59899 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
I0925 11:28:40.313947 59899 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0925 11:28:40.330472 59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0925 11:28:40.443989 59899 ssh_runner.go:195] Run: sudo systemctl restart docker
I0925 11:28:41.943902 59899 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.49987353s)
I0925 11:28:41.943995 59899 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0925 11:28:42.063894 59899 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0925 11:28:42.177577 59899 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0925 11:28:42.291042 59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0925 11:28:42.407796 59899 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0925 11:28:42.429673 59899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0925 11:28:42.553611 59899 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0925 11:28:42.637258 59899 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0925 11:28:42.637336 59899 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0925 11:28:42.643315 59899 start.go:537] Will wait 60s for crictl version
I0925 11:28:42.643380 59899 ssh_runner.go:195] Run: which crictl
I0925 11:28:42.647521 59899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0925 11:28:42.709061 59899 start.go:553] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.6
RuntimeApiVersion: v1
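Both "Will wait 60s" steps above (for the cri-dockerd socket path and for crictl version) reduce to the same primitive: poll a check until it passes or a deadline expires. A generic sketch of that wait, with the socket path taken from the log:

package main

import (
    "fmt"
    "os"
    "time"
)

// waitFor polls check every 200ms until it returns nil or timeout elapses.
func waitFor(check func() error, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for {
        err := check()
        if err == nil {
            return nil
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("timed out: %w", err)
        }
        time.Sleep(200 * time.Millisecond)
    }
}

func main() {
    err := waitFor(func() error {
        _, err := os.Stat("/var/run/cri-dockerd.sock")
        return err
    }, 60*time.Second)
    fmt.Println(err)
}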
I0925 11:28:42.709123 59899 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0925 11:28:42.735005 59899 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0925 11:28:39.992653 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:42.493405 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:42.763193 59899 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
I0925 11:28:42.763239 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetIP
I0925 11:28:42.766116 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:42.766453 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:28:42.766487 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:28:42.766740 59899 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0925 11:28:42.770645 59899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
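The one-liner above rewrites /etc/hosts by filtering out any existing host.minikube.internal line, appending the fresh mapping, and copying the temp file back into place. The same idea in Go, as a sketch; tab-separated entries are assumed, matching the grep pattern:

package main

import (
    "fmt"
    "os"
    "strings"
)

// ensureHostsEntry keeps exactly one "<ip>\t<name>" line in an
// /etc/hosts-style file, mirroring the grep -v / echo / cp one-liner above.
func ensureHostsEntry(path, ip, name string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
        if !strings.HasSuffix(line, "\t"+name) {
            kept = append(kept, line)
        }
    }
    kept = append(kept, ip+"\t"+name)
    return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
    err := ensureHostsEntry("/tmp/hosts", "192.168.39.1", "host.minikube.internal")
    fmt.Println(err)
}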
I0925 11:28:42.782793 59899 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
I0925 11:28:42.782837 59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0925 11:28:42.805110 59899 docker.go:664] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0925 11:28:42.805135 59899 docker.go:594] Images already preloaded, skipping extraction
I0925 11:28:42.805190 59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0925 11:28:42.824840 59899 docker.go:664] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0925 11:28:42.824876 59899 cache_images.go:84] Images are preloaded, skipping loading
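The "Images are preloaded" decision above compares what docker already has against the image list required for the requested Kubernetes version. A sketch of that comparison using the same `docker images --format` invocation; the required list is abbreviated:

package main

import (
    "bufio"
    "bytes"
    "fmt"
    "os/exec"
)

func main() {
    required := []string{
        "registry.k8s.io/kube-apiserver:v1.28.2",
        "registry.k8s.io/etcd:3.5.9-0",
        "registry.k8s.io/coredns/coredns:v1.10.1",
        "registry.k8s.io/pause:3.9",
    }
    out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    if err != nil {
        panic(err)
    }
    // Index the locally available repo:tag pairs.
    have := map[string]bool{}
    sc := bufio.NewScanner(bytes.NewReader(out))
    for sc.Scan() {
        have[sc.Text()] = true
    }
    missing := 0
    for _, img := range required {
        if !have[img] {
            fmt.Println("needs loading:", img)
            missing++
        }
    }
    if missing == 0 {
        fmt.Println("Images are preloaded, skipping loading")
    }
}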
I0925 11:28:42.824941 59899 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0925 11:28:42.858255 59899 cni.go:84] Creating CNI manager for ""
I0925 11:28:42.858285 59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0925 11:28:42.858303 59899 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0925 11:28:42.858319 59899 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.111 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-094323 NodeName:embed-certs-094323 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0925 11:28:42.858443 59899 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.111
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "embed-certs-094323"
  kubeletExtraArgs:
    node-ip: 192.168.39.111
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.111"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0925 11:28:42.858508 59899 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-094323 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
[Install]
config:
{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0925 11:28:42.858563 59899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
I0925 11:28:42.868791 59899 binaries.go:44] Found k8s binaries, skipping transfer
I0925 11:28:42.868861 59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0925 11:28:42.878094 59899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
I0925 11:28:42.894185 59899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0925 11:28:42.910390 59899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
I0925 11:28:42.929194 59899 ssh_runner.go:195] Run: grep 192.168.39.111 control-plane.minikube.internal$ /etc/hosts
I0925 11:28:42.933290 59899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.111 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0925 11:28:42.946061 59899 certs.go:56] Setting up /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323 for IP: 192.168.39.111
I0925 11:28:42.946095 59899 certs.go:190] acquiring lock for shared ca certs: {Name:mkb77fd8e605e52ea68ab5351af7de9da389c0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 11:28:42.946253 59899 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key
I0925 11:28:42.946292 59899 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key
I0925 11:28:42.946354 59899 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/client.key
I0925 11:28:42.946414 59899 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.key.f4aa454f
I0925 11:28:42.946448 59899 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.key
I0925 11:28:42.946581 59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem (1338 bytes)
W0925 11:28:42.946628 59899 certs.go:433] ignoring /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213_empty.pem, impossibly tiny 0 bytes
I0925 11:28:42.946648 59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca-key.pem (1675 bytes)
I0925 11:28:42.946675 59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/ca.pem (1078 bytes)
I0925 11:28:42.946706 59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/cert.pem (1123 bytes)
I0925 11:28:42.946743 59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/certs/home/jenkins/minikube-integration/17297-6032/.minikube/certs/key.pem (1679 bytes)
I0925 11:28:42.946793 59899 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem (1708 bytes)
I0925 11:28:42.947417 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0925 11:28:42.970517 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0925 11:28:42.995598 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0925 11:28:43.019025 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/profiles/embed-certs-094323/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0925 11:28:43.044246 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0925 11:28:43.068806 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0925 11:28:43.093317 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0925 11:28:43.117196 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0925 11:28:43.140309 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/certs/13213.pem --> /usr/share/ca-certificates/13213.pem (1338 bytes)
I0925 11:28:43.164129 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/files/etc/ssl/certs/132132.pem --> /usr/share/ca-certificates/132132.pem (1708 bytes)
I0925 11:28:43.187747 59899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-6032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0925 11:28:43.211759 59899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0925 11:28:43.229751 59899 ssh_runner.go:195] Run: openssl version
I0925 11:28:43.235370 59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13213.pem && ln -fs /usr/share/ca-certificates/13213.pem /etc/ssl/certs/13213.pem"
I0925 11:28:43.244462 59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13213.pem
I0925 11:28:43.249084 59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 10:38 /usr/share/ca-certificates/13213.pem
I0925 11:28:43.249131 59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13213.pem
I0925 11:28:43.254522 59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13213.pem /etc/ssl/certs/51391683.0"
I0925 11:28:43.263996 59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132132.pem && ln -fs /usr/share/ca-certificates/132132.pem /etc/ssl/certs/132132.pem"
I0925 11:28:43.273424 59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132132.pem
I0925 11:28:43.278155 59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 10:38 /usr/share/ca-certificates/132132.pem
I0925 11:28:43.278194 59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132132.pem
I0925 11:28:43.283762 59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132132.pem /etc/ssl/certs/3ec20f2e.0"
I0925 11:28:43.293817 59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0925 11:28:43.303828 59899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0925 11:28:43.309173 59899 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
I0925 11:28:43.309215 59899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0925 11:28:43.315555 59899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0925 11:28:43.325092 59899 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0925 11:28:43.329555 59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0925 11:28:43.335420 59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0925 11:28:43.341663 59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0925 11:28:43.347218 59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0925 11:28:43.352934 59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0925 11:28:43.359116 59899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
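
The six probes above each run "openssl x509 -noout -in <cert> -checkend 86400", which exits non-zero if the certificate expires within 24 hours. A rough Go equivalent using crypto/x509 (illustrative sketch, not what minikube runs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Mirror -checkend 86400: fail if expiry falls within the next 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 86400 seconds")
}
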
I0925 11:28:43.364415 59899 kubeadm.go:404] StartCluster: {Name:embed-certs-094323 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-094323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I0925 11:28:43.364539 59899 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0925 11:28:43.383931 59899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0925 11:28:43.393096 59899 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I0925 11:28:43.393114 59899 kubeadm.go:636] restartCluster start
I0925 11:28:43.393149 59899 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0925 11:28:43.402414 59899 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0925 11:28:43.403165 59899 kubeconfig.go:135] verify returned: extract IP: "embed-certs-094323" does not appear in /home/jenkins/minikube-integration/17297-6032/kubeconfig
I0925 11:28:43.403590 59899 kubeconfig.go:146] "embed-certs-094323" context is missing from /home/jenkins/minikube-integration/17297-6032/kubeconfig - will repair!
I0925 11:28:43.404176 59899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 11:28:43.405944 59899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0925 11:28:43.413960 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:43.414004 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:43.424035 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:43.424049 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:43.424076 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:43.435299 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:43.935935 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:43.936031 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:43.947516 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
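
The repeated probe above is "sudo pgrep -xnf kube-apiserver.*minikube.*": -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching process; pgrep exits 1 when nothing matches, which is why each check is logged as "stopped" while the apiserver is down. A hypothetical Go wrapper around the same probe (not minikube's api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID returns the newest kube-apiserver PID, or an error if
// pgrep finds no match (it exits with status 1 in that case).
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("apiserver not running: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pid, err := apiserverPID()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
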
I0925 11:28:39.905311 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:41.908598 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:44.404783 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:41.172736 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:43.174138 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:45.174205 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:44.990934 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:46.991805 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:44.435537 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:44.435624 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:44.447609 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:44.936220 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:44.936386 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:44.948140 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:45.435733 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:45.435829 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:45.448013 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:45.935443 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:45.935535 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:45.947333 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:46.435451 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:46.435515 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:46.447174 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:46.935705 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:46.935782 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:46.947562 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:47.436134 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:47.436202 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:47.447762 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:47.936080 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:47.936141 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:47.947832 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:48.435362 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:48.435430 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:48.446887 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:48.935379 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:48.935477 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:48.948793 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:46.904475 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:48.905486 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:47.176223 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:49.674353 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:49.491562 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:51.492069 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:53.492471 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:49.436282 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:49.436396 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:49.447719 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:49.936050 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:49.936137 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:49.948346 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:50.435443 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:50.435524 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:50.446725 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:50.936401 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:50.936479 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:50.948716 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:51.436316 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:51.436391 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:51.447984 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:51.936106 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:51.936183 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:51.951846 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:52.435363 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:52.435459 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:52.447499 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:52.936093 59899 api_server.go:166] Checking apiserver status ...
I0925 11:28:52.936170 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0925 11:28:52.948743 59899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0925 11:28:53.414466 59899 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
I0925 11:28:53.414503 59899 kubeadm.go:1128] stopping kube-system containers ...
I0925 11:28:53.414561 59899 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0925 11:28:53.436706 59899 docker.go:463] Stopping containers: [5433505b8c84 5955297b2651 0b460a10ea1f 8f77078f7165 339fcb3416d5 b8e7d5af3c42 41f8be78a4f7 00a2998c5488 55442ce14fe2 a9a363aa2856 e1118b32fbd4 dcf727ef2c38 d7715df7bd8b fc60135d9ddb 56727523c1f3]
I0925 11:28:53.436785 59899 ssh_runner.go:195] Run: docker stop 5433505b8c84 5955297b2651 0b460a10ea1f 8f77078f7165 339fcb3416d5 b8e7d5af3c42 41f8be78a4f7 00a2998c5488 55442ce14fe2 a9a363aa2856 e1118b32fbd4 dcf727ef2c38 d7715df7bd8b fc60135d9ddb 56727523c1f3
I0925 11:28:53.460993 59899 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0925 11:28:53.476266 59899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0925 11:28:53.485682 59899 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0925 11:28:53.485753 59899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0925 11:28:53.495238 59899 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0925 11:28:53.495259 59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:28:53.625292 59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:28:51.404218 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:53.404644 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:52.173594 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:54.173762 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:55.992677 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:58.491954 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:54.299318 59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:28:54.496012 59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:28:54.595147 59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:28:54.679425 59899 api_server.go:52] waiting for apiserver process to appear ...
I0925 11:28:54.679506 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:28:54.698114 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:28:55.211538 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:28:55.711672 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:28:56.211025 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:28:56.711636 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:28:56.734459 59899 api_server.go:72] duration metric: took 2.055031465s to wait for apiserver process to appear ...
I0925 11:28:56.734482 59899 api_server.go:88] waiting for apiserver healthz status ...
I0925 11:28:56.734499 59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
I0925 11:28:56.735092 59899 api_server.go:269] stopped: https://192.168.39.111:8443/healthz: Get "https://192.168.39.111:8443/healthz": dial tcp 192.168.39.111:8443: connect: connection refused
I0925 11:28:56.735125 59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
I0925 11:28:56.735727 59899 api_server.go:269] stopped: https://192.168.39.111:8443/healthz: Get "https://192.168.39.111:8443/healthz": dial tcp 192.168.39.111:8443: connect: connection refused
I0925 11:28:57.236460 59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
I0925 11:28:55.405884 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:57.904799 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:56.673626 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:28:58.673704 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:00.709537 59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0925 11:29:00.709569 59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0925 11:29:00.709581 59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
I0925 11:29:00.795585 59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0925 11:29:00.795613 59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0925 11:29:00.795624 59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
I0925 11:29:00.911357 59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[-]autoregister-completion failed: reason withheld
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0925 11:29:00.911393 59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
[+]poststarthook/start-system-namespaces-controller ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[-]autoregister-completion failed: reason withheld
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0925 11:29:01.236809 59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
I0925 11:29:01.242260 59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0925 11:29:01.242286 59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0925 11:29:01.735856 59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
I0925 11:29:01.743534 59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0925 11:29:01.743563 59899 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0925 11:29:02.236812 59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
I0925 11:29:02.247395 59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
ok
I0925 11:29:02.257253 59899 api_server.go:141] control plane version: v1.28.2
I0925 11:29:02.257277 59899 api_server.go:131] duration metric: took 5.522789199s to wait for apiserver health ...
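
A minimal sketch of the healthz poll traced above, assuming an anonymous HTTPS client that skips TLS verification (the apiserver answers 403 for system:anonymous, then 500 while post-start hooks finish, then 200 "ok"); this is illustrative, not minikube's implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.39.111:8443/healthz")
		if err != nil {
			// Connection refused while the apiserver is still booting.
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return // healthz returned "ok"
		}
		// 403/500 until RBAC bootstrap and post-start hooks complete.
		time.Sleep(500 * time.Millisecond)
	}
}
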
I0925 11:29:02.257286 59899 cni.go:84] Creating CNI manager for ""
I0925 11:29:02.257297 59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0925 11:29:02.258988 59899 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0925 11:29:00.496638 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:02.992616 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:02.260493 59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0925 11:29:02.275303 59899 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0925 11:29:02.297272 59899 system_pods.go:43] waiting for kube-system pods to appear ...
I0925 11:29:02.308818 59899 system_pods.go:59] 8 kube-system pods found
I0925 11:29:02.308855 59899 system_pods.go:61] "coredns-5dd5756b68-7kfz5" [9225f684-4ad2-462b-a20b-13dd27aad56f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:29:02.308868 59899 system_pods.go:61] "etcd-embed-certs-094323" [5603d9a0-390a-4cf1-ad8f-a976016d96e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0925 11:29:02.308879 59899 system_pods.go:61] "kube-apiserver-embed-certs-094323" [eb928fb0-77a3-45c5-81ce-03ffcb288548] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0925 11:29:02.308889 59899 system_pods.go:61] "kube-controller-manager-embed-certs-094323" [8ee4e42e-367a-4be8-9787-c6eb13913d8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0925 11:29:02.308900 59899 system_pods.go:61] "kube-proxy-5k6vp" [b5a3fb6d-bc10-4cde-a1f1-8c57a1fa480b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:29:02.308911 59899 system_pods.go:61] "kube-scheduler-embed-certs-094323" [4e15edd2-b5f1-4441-b940-2055f20354d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0925 11:29:02.308926 59899 system_pods.go:61] "metrics-server-57f55c9bc5-xcns4" [32a1d71d-7f4d-466a-b745-d2fdf6a88570] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:29:02.308942 59899 system_pods.go:61] "storage-provisioner" [91ac60cc-4154-4e62-aa3e-6c492764d7f7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:29:02.308955 59899 system_pods.go:74] duration metric: took 11.663759ms to wait for pod list to return data ...
I0925 11:29:02.308969 59899 node_conditions.go:102] verifying NodePressure condition ...
I0925 11:29:02.315279 59899 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0925 11:29:02.315316 59899 node_conditions.go:123] node cpu capacity is 2
I0925 11:29:02.315329 59899 node_conditions.go:105] duration metric: took 6.35463ms to run NodePressure ...
I0925 11:29:02.315351 59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0925 11:29:02.598238 59899 kubeadm.go:772] waiting for restarted kubelet to initialise ...
I0925 11:29:02.603645 59899 kubeadm.go:787] kubelet initialised
I0925 11:29:02.603673 59899 kubeadm.go:788] duration metric: took 5.409805ms waiting for restarted kubelet to initialise ...
I0925 11:29:02.603682 59899 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
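
The pod_ready.go lines that follow all reduce to reading the PodReady condition from each pod's status. A hedged client-go sketch of that check (kubeconfig path and pod name taken from this log; not minikube's exact code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17297-6032/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-7kfz5", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podReady(pod))
}
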
I0925 11:29:02.609652 59899 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace to be "Ready" ...
I0925 11:29:02.616919 59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
I0925 11:29:02.616945 59899 pod_ready.go:81] duration metric: took 7.267055ms waiting for pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace to be "Ready" ...
E0925 11:29:02.616957 59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "coredns-5dd5756b68-7kfz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
I0925 11:29:02.616966 59899 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:29:02.626927 59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "etcd-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
I0925 11:29:02.626952 59899 pod_ready.go:81] duration metric: took 9.977984ms waiting for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
E0925 11:29:02.626964 59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "etcd-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
I0925 11:29:02.626975 59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:29:02.635040 59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
I0925 11:29:02.635057 59899 pod_ready.go:81] duration metric: took 8.069751ms waiting for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
E0925 11:29:02.635065 59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
I0925 11:29:02.635071 59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:29:02.701570 59899 pod_ready.go:97] node "embed-certs-094323" hosting pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
I0925 11:29:02.701594 59899 pod_ready.go:81] duration metric: took 66.51566ms waiting for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
E0925 11:29:02.701604 59899 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-094323" hosting pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-094323" has status "Ready":"False"
I0925 11:29:02.701614 59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5k6vp" in "kube-system" namespace to be "Ready" ...
I0925 11:29:00.404282 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:02.407062 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:00.674496 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:02.676016 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:04.677117 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:05.005683 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:07.491820 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:04.513619 59899 pod_ready.go:92] pod "kube-proxy-5k6vp" in "kube-system" namespace has status "Ready":"True"
I0925 11:29:04.513641 59899 pod_ready.go:81] duration metric: took 1.812019136s waiting for pod "kube-proxy-5k6vp" in "kube-system" namespace to be "Ready" ...
I0925 11:29:04.513650 59899 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:29:06.610704 59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:08.610973 59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:04.905976 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:07.404291 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:09.408011 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:07.173790 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:09.673547 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:09.492854 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:11.991906 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:11.110562 59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:13.112908 59899 pod_ready.go:102] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:11.905538 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:14.404450 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:12.173257 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:14.673817 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:14.492243 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:16.991655 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:14.610905 59899 pod_ready.go:92] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
I0925 11:29:14.610923 59899 pod_ready.go:81] duration metric: took 10.097268131s waiting for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:29:14.610932 59899 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace to be "Ready" ...
I0925 11:29:16.629749 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:16.412718 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:18.906798 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:17.173554 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:19.674607 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:18.992367 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:21.491588 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:19.130001 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:21.629643 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:21.403543 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:23.405654 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:22.173742 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:24.674422 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:23.992075 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:26.491409 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:28.492221 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:24.129530 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:26.629049 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:28.629817 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:25.909201 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:28.403475 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:27.174742 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:29.673522 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:30.990733 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:33.492080 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:31.128865 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:33.129900 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:30.405115 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:32.904179 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:31.674133 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:34.173962 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:35.990697 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:37.991964 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:35.629757 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:37.630073 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:34.905517 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:37.405590 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:36.175249 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:38.674512 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:40.490747 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:42.991730 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:40.129932 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:42.628523 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:39.904204 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:41.905925 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:44.406994 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:41.172242 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:43.173423 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:45.174163 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:44.992082 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:47.491243 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:44.629935 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:47.129139 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:46.904285 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:49.409716 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:47.174974 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:49.673662 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:49.993800 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:52.491813 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:49.130049 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:51.628211 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:53.629350 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:51.905344 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:53.905370 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:52.173811 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:54.673161 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:54.493022 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:56.993331 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:55.629518 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:57.629571 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:55.909272 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:58.403659 57752 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:58.407567 57752 pod_ready.go:81] duration metric: took 4m0.000815308s waiting for pod "metrics-server-57f55c9bc5-p2tvr" in "kube-system" namespace to be "Ready" ...
E0925 11:29:58.407592 57752 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0925 11:29:58.407601 57752 pod_ready.go:38] duration metric: took 4m6.831828709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
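The pod_ready.go:102 lines above are minikube's readiness poll: four parallel test profiles (PIDs 57426, 57752, 57927, 59899) interleave here, which is why timestamps jump around, and each one re-checks its metrics-server pod roughly every 2-2.5 seconds until the pod reports Ready or a 4m0s deadline expires — the "WaitExtra: waitPodCondition: context deadline exceeded" failure just logged. The pod never becomes Ready (it stays Pending/ContainersNotReady later in this log). A minimal sketch of that deadline-bounded poll, in plain Go with a stand-in check function rather than minikube's real client-go pod lookup:

-- go sketch (hypothetical, not minikube's actual pod_ready.go) --
package main

import (
	"context"
	"fmt"
	"time"
)

// waitCondition polls check() every 2s until it returns true or ctx expires.
func waitCondition(ctx context.Context, check func() bool) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		if check() {
			return nil
		}
		select {
		case <-ctx.Done():
			// Mirrors the log: "waitPodCondition: context deadline exceeded".
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// 4m0s matches the "timed out waiting 4m0s" deadline in this log.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	podReady := func() bool { return false } // stand-in: metrics-server never reports Ready here
	if err := waitCondition(ctx, podReady); err != nil {
		fmt.Println(err)
	}
}
-- /go sketch --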
I0925 11:29:58.407622 57752 api_server.go:52] waiting for apiserver process to appear ...
I0925 11:29:58.407686 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:29:58.442532 57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
I0925 11:29:58.442627 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:29:58.466398 57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
I0925 11:29:58.466474 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:29:58.488629 57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
I0925 11:29:58.488710 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:29:58.515985 57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
I0925 11:29:58.516069 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:29:58.551483 57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
I0925 11:29:58.551593 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:29:58.575447 57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
I0925 11:29:58.575518 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:29:58.595332 57752 logs.go:284] 0 containers: []
W0925 11:29:58.595354 57752 logs.go:286] No container was found matching "kindnet"
I0925 11:29:58.595406 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:29:58.616993 57752 logs.go:284] 1 containers: [146977376d21]
I0925 11:29:58.617053 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:29:58.641655 57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
I0925 11:29:58.641682 57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
I0925 11:29:58.641692 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
I0925 11:29:58.697709 57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
I0925 11:29:58.697746 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
I0925 11:29:58.720902 57752 logs.go:123] Gathering logs for container status ...
I0925 11:29:58.720930 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0925 11:29:58.812571 57752 logs.go:123] Gathering logs for dmesg ...
I0925 11:29:58.812609 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:29:58.833650 57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
I0925 11:29:58.833678 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
I0925 11:29:58.888959 57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
I0925 11:29:58.888999 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
I0925 11:29:58.924906 57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
I0925 11:29:58.924934 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
I0925 11:29:58.951722 57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
I0925 11:29:58.951750 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
I0925 11:29:58.975890 57752 logs.go:123] Gathering logs for Docker ...
I0925 11:29:58.975912 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:29:59.042048 57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
I0925 11:29:59.042077 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
I0925 11:29:59.090056 57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
I0925 11:29:59.090083 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
I0925 11:29:59.118231 57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
I0925 11:29:59.118257 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
I0925 11:29:59.141561 57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
I0925 11:29:59.141584 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
I0925 11:29:59.168388 57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
I0925 11:29:59.168420 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
I0925 11:29:59.202331 57752 logs.go:123] Gathering logs for kubelet ...
I0925 11:29:59.202355 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0925 11:29:59.278282 57752 logs.go:123] Gathering logs for describe nodes ...
I0925 11:29:59.278317 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:29:59.431326 57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
I0925 11:29:59.431356 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
I0925 11:29:59.462487 57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
I0925 11:29:59.462516 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
I0925 11:29:59.506895 57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
I0925 11:29:59.506927 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
I0925 11:29:59.551311 57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
I0925 11:29:59.551351 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
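Each "Gathering logs" sweep above follows one pattern: enumerate the containers for a component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail the last 400 lines of each match with `docker logs --tail 400 <id>` (plus journalctl for kubelet/Docker and kubectl for "describe nodes"). A minimal local sketch of that enumerate-then-tail loop, assuming a reachable docker CLI and skipping minikube's ssh_runner layer:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose names match k8s_<component>,
// the same filter used by the gathering commands above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// Tail the last 400 lines, as in the sweeps above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
		}
	}
}
-- /go sketch --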
I0925 11:29:56.674157 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:59.174193 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:29:59.490645 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:01.491108 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:03.491826 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:00.130429 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:02.630390 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:02.085037 57752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:30:02.106600 57752 api_server.go:72] duration metric: took 4m14.069395428s to wait for apiserver process to appear ...
I0925 11:30:02.106631 57752 api_server.go:88] waiting for apiserver healthz status ...
I0925 11:30:02.106709 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:30:02.131534 57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
I0925 11:30:02.131610 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:30:02.154915 57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
I0925 11:30:02.154979 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:30:02.178047 57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
I0925 11:30:02.178108 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:30:02.202658 57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
I0925 11:30:02.202754 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:30:02.224819 57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
I0925 11:30:02.224908 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:30:02.246587 57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
I0925 11:30:02.246650 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:30:02.267013 57752 logs.go:284] 0 containers: []
W0925 11:30:02.267037 57752 logs.go:286] No container was found matching "kindnet"
I0925 11:30:02.267090 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:30:02.286403 57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
I0925 11:30:02.286476 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:30:02.307111 57752 logs.go:284] 1 containers: [146977376d21]
I0925 11:30:02.307141 57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
I0925 11:30:02.307154 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
I0925 11:30:02.347993 57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
I0925 11:30:02.348022 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
I0925 11:30:02.370841 57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
I0925 11:30:02.370875 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
I0925 11:30:02.396931 57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
I0925 11:30:02.396954 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
I0925 11:30:02.438996 57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
I0925 11:30:02.439025 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
I0925 11:30:02.464589 57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
I0925 11:30:02.464621 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
I0925 11:30:02.492060 57752 logs.go:123] Gathering logs for Docker ...
I0925 11:30:02.492087 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:30:02.558928 57752 logs.go:123] Gathering logs for container status ...
I0925 11:30:02.558959 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0925 11:30:02.654217 57752 logs.go:123] Gathering logs for dmesg ...
I0925 11:30:02.654246 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:30:02.669423 57752 logs.go:123] Gathering logs for describe nodes ...
I0925 11:30:02.669453 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:30:02.802934 57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
I0925 11:30:02.802959 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
I0925 11:30:02.835624 57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
I0925 11:30:02.835649 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
I0925 11:30:02.866826 57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
I0925 11:30:02.866849 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
I0925 11:30:02.898744 57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
I0925 11:30:02.898775 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
I0925 11:30:02.934534 57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
I0925 11:30:02.934567 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
I0925 11:30:02.972310 57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
I0925 11:30:02.972337 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
I0925 11:30:03.005474 57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
I0925 11:30:03.005499 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
I0925 11:30:03.027346 57752 logs.go:123] Gathering logs for kubelet ...
I0925 11:30:03.027368 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0925 11:30:03.099823 57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
I0925 11:30:03.099857 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
I0925 11:30:03.124682 57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
I0925 11:30:03.124717 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
I0925 11:30:01.674624 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:04.179180 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:05.991507 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:08.492917 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:05.129924 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:07.630929 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:05.663871 57752 api_server.go:253] Checking apiserver healthz at https://192.168.72.162:8443/healthz ...
I0925 11:30:05.669416 57752 api_server.go:279] https://192.168.72.162:8443/healthz returned 200:
ok
I0925 11:30:05.670783 57752 api_server.go:141] control plane version: v1.28.2
I0925 11:30:05.670809 57752 api_server.go:131] duration metric: took 3.564170226s to wait for apiserver health ...
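The healthz wait above succeeds quickly once the apiserver process is found: a GET against https://192.168.72.162:8443/healthz that returns 200 with body "ok" counts as healthy. A rough sketch of that probe; the InsecureSkipVerify transport is an assumption for brevity only (minikube authenticates against the cluster's own CA and certs):

-- go sketch --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthz reports whether the endpoint returns HTTP 200 with body "ok",
// the success criterion shown in the log above.
func healthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch only; do not skip verification in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := healthz("https://192.168.72.162:8443/healthz") // address from the log
	fmt.Println(ok, err)
}
-- /go sketch --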
I0925 11:30:05.670819 57752 system_pods.go:43] waiting for kube-system pods to appear ...
I0925 11:30:05.670872 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:30:05.693324 57752 logs.go:284] 2 containers: [ae812308b161 50dd56b362e6]
I0925 11:30:05.693399 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:30:05.717998 57752 logs.go:284] 2 containers: [f056fda5e129 771fdc2d4d72]
I0925 11:30:05.718069 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:30:05.742708 57752 logs.go:284] 2 containers: [f4f7d2a397a7 19c28e83f034]
I0925 11:30:05.742793 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:30:05.764298 57752 logs.go:284] 2 containers: [dd7534763296 0e6944ef9ef1]
I0925 11:30:05.764374 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:30:05.785970 57752 logs.go:284] 2 containers: [ba51b7a85dfa c3c77640a284]
I0925 11:30:05.786039 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:30:05.806950 57752 logs.go:284] 2 containers: [f5a2c4593b48 2b682a364274]
I0925 11:30:05.807037 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:30:05.826462 57752 logs.go:284] 0 containers: []
W0925 11:30:05.826487 57752 logs.go:286] No container was found matching "kindnet"
I0925 11:30:05.826540 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:30:05.845927 57752 logs.go:284] 1 containers: [146977376d21]
I0925 11:30:05.845997 57752 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:30:05.868573 57752 logs.go:284] 2 containers: [a296191b186b e152c53b10e3]
I0925 11:30:05.868615 57752 logs.go:123] Gathering logs for kube-scheduler [0e6944ef9ef1] ...
I0925 11:30:05.868629 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e6944ef9ef1"
I0925 11:30:05.909242 57752 logs.go:123] Gathering logs for kube-controller-manager [f5a2c4593b48] ...
I0925 11:30:05.909270 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5a2c4593b48"
I0925 11:30:05.959647 57752 logs.go:123] Gathering logs for storage-provisioner [e152c53b10e3] ...
I0925 11:30:05.959680 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e152c53b10e3"
I0925 11:30:05.988448 57752 logs.go:123] Gathering logs for kubelet ...
I0925 11:30:05.988480 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0925 11:30:06.067394 57752 logs.go:123] Gathering logs for dmesg ...
I0925 11:30:06.067429 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:30:06.084943 57752 logs.go:123] Gathering logs for describe nodes ...
I0925 11:30:06.084971 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:30:06.238324 57752 logs.go:123] Gathering logs for etcd [f056fda5e129] ...
I0925 11:30:06.238357 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f056fda5e129"
I0925 11:30:06.273373 57752 logs.go:123] Gathering logs for coredns [f4f7d2a397a7] ...
I0925 11:30:06.273403 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f7d2a397a7"
I0925 11:30:06.303181 57752 logs.go:123] Gathering logs for Docker ...
I0925 11:30:06.303211 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:30:06.365354 57752 logs.go:123] Gathering logs for coredns [19c28e83f034] ...
I0925 11:30:06.365398 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c28e83f034"
I0925 11:30:06.391962 57752 logs.go:123] Gathering logs for kube-scheduler [dd7534763296] ...
I0925 11:30:06.391989 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd7534763296"
I0925 11:30:06.415389 57752 logs.go:123] Gathering logs for kube-proxy [c3c77640a284] ...
I0925 11:30:06.415412 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3c77640a284"
I0925 11:30:06.441786 57752 logs.go:123] Gathering logs for kube-controller-manager [2b682a364274] ...
I0925 11:30:06.441809 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b682a364274"
I0925 11:30:06.479862 57752 logs.go:123] Gathering logs for kubernetes-dashboard [146977376d21] ...
I0925 11:30:06.479892 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 146977376d21"
I0925 11:30:06.507143 57752 logs.go:123] Gathering logs for kube-apiserver [50dd56b362e6] ...
I0925 11:30:06.507186 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50dd56b362e6"
I0925 11:30:06.546486 57752 logs.go:123] Gathering logs for etcd [771fdc2d4d72] ...
I0925 11:30:06.546514 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 771fdc2d4d72"
I0925 11:30:06.591229 57752 logs.go:123] Gathering logs for kube-proxy [ba51b7a85dfa] ...
I0925 11:30:06.591258 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba51b7a85dfa"
I0925 11:30:06.616844 57752 logs.go:123] Gathering logs for container status ...
I0925 11:30:06.616869 57752 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0925 11:30:06.705576 57752 logs.go:123] Gathering logs for kube-apiserver [ae812308b161] ...
I0925 11:30:06.705606 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae812308b161"
I0925 11:30:06.742505 57752 logs.go:123] Gathering logs for storage-provisioner [a296191b186b] ...
I0925 11:30:06.742533 57752 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a296191b186b"
I0925 11:30:09.274341 57752 system_pods.go:59] 8 kube-system pods found
I0925 11:30:09.274368 57752 system_pods.go:61] "coredns-5dd5756b68-6786d" [e86c1a30-32f4-4518-9225-a6e735760871] Running
I0925 11:30:09.274373 57752 system_pods.go:61] "etcd-no-preload-863905" [1af0b15d-6fff-41af-a97e-dc18bba9480f] Running
I0925 11:30:09.274378 57752 system_pods.go:61] "kube-apiserver-no-preload-863905" [f7b1ffbf-13ef-4e05-9e71-87d03330cbf8] Running
I0925 11:30:09.274383 57752 system_pods.go:61] "kube-controller-manager-no-preload-863905" [0fdd6d61-d653-4555-8333-e8275502c7b2] Running
I0925 11:30:09.274386 57752 system_pods.go:61] "kube-proxy-g9dff" [db292442-0bc8-4d3f-b34f-c0142915ca47] Running
I0925 11:30:09.274390 57752 system_pods.go:61] "kube-scheduler-no-preload-863905" [e832de51-a864-49ac-9919-9a02b16a029b] Running
I0925 11:30:09.274397 57752 system_pods.go:61] "metrics-server-57f55c9bc5-p2tvr" [fc088a2c-3867-410d-b513-29e872f8156e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:30:09.274402 57752 system_pods.go:61] "storage-provisioner" [13df307c-c76e-4abd-bd03-165b04163d3a] Running
I0925 11:30:09.274408 57752 system_pods.go:74] duration metric: took 3.603584218s to wait for pod list to return data ...
I0925 11:30:09.274414 57752 default_sa.go:34] waiting for default service account to be created ...
I0925 11:30:09.276929 57752 default_sa.go:45] found service account: "default"
I0925 11:30:09.276948 57752 default_sa.go:55] duration metric: took 2.5282ms for default service account to be created ...
I0925 11:30:09.276954 57752 system_pods.go:116] waiting for k8s-apps to be running ...
I0925 11:30:09.282656 57752 system_pods.go:86] 8 kube-system pods found
I0925 11:30:09.282684 57752 system_pods.go:89] "coredns-5dd5756b68-6786d" [e86c1a30-32f4-4518-9225-a6e735760871] Running
I0925 11:30:09.282690 57752 system_pods.go:89] "etcd-no-preload-863905" [1af0b15d-6fff-41af-a97e-dc18bba9480f] Running
I0925 11:30:09.282694 57752 system_pods.go:89] "kube-apiserver-no-preload-863905" [f7b1ffbf-13ef-4e05-9e71-87d03330cbf8] Running
I0925 11:30:09.282699 57752 system_pods.go:89] "kube-controller-manager-no-preload-863905" [0fdd6d61-d653-4555-8333-e8275502c7b2] Running
I0925 11:30:09.282702 57752 system_pods.go:89] "kube-proxy-g9dff" [db292442-0bc8-4d3f-b34f-c0142915ca47] Running
I0925 11:30:09.282706 57752 system_pods.go:89] "kube-scheduler-no-preload-863905" [e832de51-a864-49ac-9919-9a02b16a029b] Running
I0925 11:30:09.282712 57752 system_pods.go:89] "metrics-server-57f55c9bc5-p2tvr" [fc088a2c-3867-410d-b513-29e872f8156e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:30:09.282721 57752 system_pods.go:89] "storage-provisioner" [13df307c-c76e-4abd-bd03-165b04163d3a] Running
I0925 11:30:09.282728 57752 system_pods.go:126] duration metric: took 5.769715ms to wait for k8s-apps to be running ...
I0925 11:30:09.282734 57752 system_svc.go:44] waiting for kubelet service to be running ....
I0925 11:30:09.282774 57752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0925 11:30:09.296447 57752 system_svc.go:56] duration metric: took 13.70254ms WaitForService to wait for kubelet.
I0925 11:30:09.296472 57752 kubeadm.go:581] duration metric: took 4m21.259281902s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0925 11:30:09.296496 57752 node_conditions.go:102] verifying NodePressure condition ...
I0925 11:30:09.300312 57752 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0925 11:30:09.300337 57752 node_conditions.go:123] node cpu capacity is 2
I0925 11:30:09.300350 57752 node_conditions.go:105] duration metric: took 3.848191ms to run NodePressure ...
I0925 11:30:09.300362 57752 start.go:228] waiting for startup goroutines ...
I0925 11:30:09.300371 57752 start.go:233] waiting for cluster config update ...
I0925 11:30:09.300384 57752 start.go:242] writing updated cluster config ...
I0925 11:30:09.300719 57752 ssh_runner.go:195] Run: rm -f paused
I0925 11:30:09.350285 57752 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
I0925 11:30:09.353257 57752 out.go:177] * Done! kubectl is now configured to use "no-preload-863905" cluster and "default" namespace by default
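The closing "minor skew: 0" line compares the local kubectl minor version against the cluster's (both 1.28 here) before declaring the profile ready. A toy version of that comparison, with versions hard-coded from the line above; the one-minor-version threshold reflects kubectl's general version-skew policy, not necessarily minikube's exact rule:

-- go sketch --
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.28.2", "1.28.2" // versions from the log
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	if skew > 1 {
		fmt.Println("warning: kubectl and cluster differ by more than one minor version")
	}
}
-- /go sketch --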
I0925 11:30:06.676262 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:09.174330 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:10.992813 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:13.490354 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:09.636520 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:12.129471 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:11.175516 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:13.673816 57426 pod_ready.go:102] pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:14.366919 57426 pod_ready.go:81] duration metric: took 4m0.00014225s waiting for pod "metrics-server-74d5856cc6-mknft" in "kube-system" namespace to be "Ready" ...
E0925 11:30:14.366953 57426 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0925 11:30:14.366991 57426 pod_ready.go:38] duration metric: took 4m1.195639658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 11:30:14.367015 57426 kubeadm.go:640] restartCluster took 5m2.405916758s
W0925 11:30:14.367083 57426 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
I0925 11:30:14.367112 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0925 11:30:15.494599 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:17.993167 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:14.130508 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:16.132437 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:18.631163 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:17.424908 57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3.057768249s)
I0925 11:30:17.424975 57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0925 11:30:17.439514 57426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0925 11:30:17.449686 57426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0925 11:30:17.460096 57426 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0925 11:30:17.460147 57426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I0925 11:30:17.622252 57426 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0925 11:30:17.662261 57426 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
I0925 11:30:17.759764 57426 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
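Having given up on restarting the old cluster, PID 57426 wipes it with `kubeadm reset` and re-runs `kubeadm init` against the same generated config, suppressing the preflight checks listed on the Start line (the three [WARNING] lines above are the non-fatal remainder). A reduced sketch of that fallback invocation, assuming a kubeadm on PATH rather than the versioned binary path and ssh transport used in the log:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Subset of the --ignore-preflight-errors list from the Start line above.
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"Port-10250", "Swap", "NumCPU",
	}
	cmd := exec.Command("sudo", "kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+strings.Join(ignored, ","))
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nkubeadm init err: %v\n", out, err)
}
-- /go sketch --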
I0925 11:30:20.493076 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:22.995449 57927 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:21.130370 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:23.137540 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:24.792048 57927 pod_ready.go:81] duration metric: took 4m0.000079144s waiting for pod "metrics-server-57f55c9bc5-wcdlv" in "kube-system" namespace to be "Ready" ...
E0925 11:30:24.792097 57927 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0925 11:30:24.792110 57927 pod_ready.go:38] duration metric: took 4m9.506946432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 11:30:24.792141 57927 api_server.go:52] waiting for apiserver process to appear ...
I0925 11:30:24.792215 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:30:24.824238 57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
I0925 11:30:24.824320 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:30:24.843686 57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
I0925 11:30:24.843764 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:30:24.868292 57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
I0925 11:30:24.868377 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:30:24.892540 57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
I0925 11:30:24.892617 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:30:24.919019 57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
I0925 11:30:24.919091 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:30:24.946855 57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
I0925 11:30:24.946930 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:30:24.989142 57927 logs.go:284] 0 containers: []
W0925 11:30:24.989168 57927 logs.go:286] No container was found matching "kindnet"
I0925 11:30:24.989220 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:30:25.011261 57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
I0925 11:30:25.011345 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:30:25.030950 57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
I0925 11:30:25.030977 57927 logs.go:123] Gathering logs for kubelet ...
I0925 11:30:25.030989 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0925 11:30:25.120210 57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
I0925 11:30:25.120239 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
I0925 11:30:25.152215 57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
I0925 11:30:25.152243 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
I0925 11:30:25.194959 57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
I0925 11:30:25.194997 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
I0925 11:30:25.229067 57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
I0925 11:30:25.229094 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
I0925 11:30:25.256359 57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
I0925 11:30:25.256386 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
I0925 11:30:25.280428 57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
I0925 11:30:25.280459 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
I0925 11:30:25.330876 57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
I0925 11:30:25.330902 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
I0925 11:30:25.353121 57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
I0925 11:30:25.353148 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
I0925 11:30:25.375127 57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
I0925 11:30:25.375154 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
I0925 11:30:25.402664 57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
I0925 11:30:25.402690 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
I0925 11:30:25.428214 57927 logs.go:123] Gathering logs for container status ...
I0925 11:30:25.428238 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0925 11:30:25.509982 57927 logs.go:123] Gathering logs for dmesg ...
I0925 11:30:25.510015 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:30:25.525584 57927 logs.go:123] Gathering logs for describe nodes ...
I0925 11:30:25.525623 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:30:25.696377 57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
I0925 11:30:25.696402 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
I0925 11:30:25.734242 57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
I0925 11:30:25.734271 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
I0925 11:30:25.763410 57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
I0925 11:30:25.763436 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
I0925 11:30:25.797529 57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
I0925 11:30:25.797556 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
I0925 11:30:25.843899 57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
I0925 11:30:25.843927 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
I0925 11:30:25.896478 57927 logs.go:123] Gathering logs for Docker ...
I0925 11:30:25.896507 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:30:28.465765 57927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:30:28.480996 57927 api_server.go:72] duration metric: took 4m15.769590927s to wait for apiserver process to appear ...
I0925 11:30:28.481023 57927 api_server.go:88] waiting for apiserver healthz status ...
I0925 11:30:28.481101 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:30:25.631323 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:28.129055 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:30.749642 57426 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
I0925 11:30:30.749742 57426 kubeadm.go:322] [preflight] Running pre-flight checks
I0925 11:30:30.749858 57426 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0925 11:30:30.749944 57426 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0925 11:30:30.750021 57426 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0925 11:30:30.750109 57426 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0925 11:30:30.750191 57426 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0925 11:30:30.750247 57426 kubeadm.go:322] [kubelet-start] Activating the kubelet service
I0925 11:30:30.750371 57426 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0925 11:30:30.751913 57426 out.go:204] - Generating certificates and keys ...
I0925 11:30:30.752003 57426 kubeadm.go:322] [certs] Using existing ca certificate authority
I0925 11:30:30.752119 57426 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0925 11:30:30.752232 57426 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0925 11:30:30.752318 57426 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0925 11:30:30.752414 57426 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0925 11:30:30.752468 57426 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0925 11:30:30.752570 57426 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0925 11:30:30.752681 57426 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0925 11:30:30.752781 57426 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0925 11:30:30.752890 57426 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0925 11:30:30.752940 57426 kubeadm.go:322] [certs] Using the existing "sa" key
I0925 11:30:30.753020 57426 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0925 11:30:30.753090 57426 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0925 11:30:30.753154 57426 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0925 11:30:30.753251 57426 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0925 11:30:30.753324 57426 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0925 11:30:30.753398 57426 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0925 11:30:30.755107 57426 out.go:204] - Booting up control plane ...
I0925 11:30:30.755208 57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0925 11:30:30.755334 57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0925 11:30:30.755426 57426 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0925 11:30:30.755500 57426 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0925 11:30:30.755652 57426 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0925 11:30:30.755754 57426 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.505077 seconds
I0925 11:30:30.755912 57426 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0925 11:30:30.756083 57426 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
I0925 11:30:30.756182 57426 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0925 11:30:30.756384 57426 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-694015 as control-plane by adding the label "node-role.kubernetes.io/master=''"
I0925 11:30:30.756471 57426 kubeadm.go:322] [bootstrap-token] Using token: snq27o.n0f9uw50v17gbayd
I0925 11:30:28.509506 57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
I0925 11:30:28.509575 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:30:28.532621 57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
I0925 11:30:28.532723 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:30:28.554799 57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
I0925 11:30:28.554878 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:30:28.574977 57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
I0925 11:30:28.575048 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:30:28.596014 57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
I0925 11:30:28.596094 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:30:28.616627 57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
I0925 11:30:28.616712 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:30:28.636762 57927 logs.go:284] 0 containers: []
W0925 11:30:28.636782 57927 logs.go:286] No container was found matching "kindnet"
I0925 11:30:28.636838 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:30:28.659028 57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
I0925 11:30:28.659094 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:30:28.680689 57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
I0925 11:30:28.680722 57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
I0925 11:30:28.680736 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
I0925 11:30:28.714051 57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
I0925 11:30:28.714078 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
I0925 11:30:28.762170 57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
I0925 11:30:28.762204 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
I0925 11:30:28.788343 57927 logs.go:123] Gathering logs for container status ...
I0925 11:30:28.788371 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0925 11:30:28.869517 57927 logs.go:123] Gathering logs for describe nodes ...
I0925 11:30:28.869548 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:30:29.002897 57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
I0925 11:30:29.002920 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
I0925 11:30:29.032416 57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
I0925 11:30:29.032444 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
I0925 11:30:29.063893 57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
I0925 11:30:29.063921 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
I0925 11:30:29.089890 57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
I0925 11:30:29.089916 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
I0925 11:30:29.132797 57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
I0925 11:30:29.132827 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
I0925 11:30:29.155350 57927 logs.go:123] Gathering logs for Docker ...
I0925 11:30:29.155371 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:30:29.213418 57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
I0925 11:30:29.213447 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
I0925 11:30:29.254863 57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
I0925 11:30:29.254891 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
I0925 11:30:29.277677 57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
I0925 11:30:29.277709 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
I0925 11:30:29.308393 57927 logs.go:123] Gathering logs for dmesg ...
I0925 11:30:29.308422 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:30:29.330968 57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
I0925 11:30:29.330989 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
I0925 11:30:29.374515 57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
I0925 11:30:29.374542 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
I0925 11:30:29.399946 57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
I0925 11:30:29.399975 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
I0925 11:30:29.445837 57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
I0925 11:30:29.445870 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
I0925 11:30:29.468320 57927 logs.go:123] Gathering logs for kubelet ...
I0925 11:30:29.468346 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0925 11:30:32.042767 57927 api_server.go:253] Checking apiserver healthz at https://192.168.61.208:8444/healthz ...
I0925 11:30:32.048546 57927 api_server.go:279] https://192.168.61.208:8444/healthz returned 200:
ok
I0925 11:30:32.052014 57927 api_server.go:141] control plane version: v1.28.2
I0925 11:30:32.052036 57927 api_server.go:131] duration metric: took 3.571006059s to wait for apiserver health ...
I0925 11:30:32.052046 57927 system_pods.go:43] waiting for kube-system pods to appear ...
I0925 11:30:32.052108 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:30:32.083762 57927 logs.go:284] 2 containers: [8b9c731d3b7e d7bd5b496cbd]
I0925 11:30:32.083848 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:30:32.106317 57927 logs.go:284] 2 containers: [398bd2a5d8a1 5885667a7efa]
I0925 11:30:32.106392 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:30:32.128245 57927 logs.go:284] 2 containers: [f04ac298d08a 7603adb1cbbb]
I0925 11:30:32.128333 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:30:32.148973 57927 logs.go:284] 2 containers: [3815d034e8cc fb845c120fcf]
I0925 11:30:32.149052 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:30:32.174028 57927 logs.go:284] 2 containers: [3061d1aa366b 30075b5efc6f]
I0925 11:30:32.174103 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:30:32.196115 57927 logs.go:284] 2 containers: [b75d214e650c 1e96b0e25a6d]
I0925 11:30:32.196181 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:30:32.216678 57927 logs.go:284] 0 containers: []
W0925 11:30:32.216702 57927 logs.go:286] No container was found matching "kindnet"
I0925 11:30:32.216757 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:30:32.237388 57927 logs.go:284] 1 containers: [f3cb7eacbd5f]
I0925 11:30:32.237473 57927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:30:32.257088 57927 logs.go:284] 2 containers: [0f7378f7cd7f b9d2c22b02cb]
I0925 11:30:32.257112 57927 logs.go:123] Gathering logs for kubelet ...
I0925 11:30:32.257122 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0925 11:30:32.327894 57927 logs.go:123] Gathering logs for kube-apiserver [8b9c731d3b7e] ...
I0925 11:30:32.327929 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9c731d3b7e"
I0925 11:30:32.365128 57927 logs.go:123] Gathering logs for kube-scheduler [3815d034e8cc] ...
I0925 11:30:32.365156 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3815d034e8cc"
I0925 11:30:32.394664 57927 logs.go:123] Gathering logs for kubernetes-dashboard [f3cb7eacbd5f] ...
I0925 11:30:32.394703 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3cb7eacbd5f"
I0925 11:30:32.450709 57927 logs.go:123] Gathering logs for Docker ...
I0925 11:30:32.450737 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:30:32.512407 57927 logs.go:123] Gathering logs for container status ...
I0925 11:30:32.512442 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0925 11:30:32.602958 57927 logs.go:123] Gathering logs for kube-apiserver [d7bd5b496cbd] ...
I0925 11:30:32.602985 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bd5b496cbd"
I0925 11:30:32.646449 57927 logs.go:123] Gathering logs for etcd [5885667a7efa] ...
I0925 11:30:32.646478 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5885667a7efa"
I0925 11:30:32.693817 57927 logs.go:123] Gathering logs for coredns [7603adb1cbbb] ...
I0925 11:30:32.693843 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7603adb1cbbb"
I0925 11:30:32.728336 57927 logs.go:123] Gathering logs for kube-proxy [3061d1aa366b] ...
I0925 11:30:32.728364 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3061d1aa366b"
I0925 11:30:32.754018 57927 logs.go:123] Gathering logs for kube-controller-manager [1e96b0e25a6d] ...
I0925 11:30:32.754053 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e96b0e25a6d"
I0925 11:30:32.791438 57927 logs.go:123] Gathering logs for storage-provisioner [0f7378f7cd7f] ...
I0925 11:30:32.791473 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f7378f7cd7f"
I0925 11:30:32.813473 57927 logs.go:123] Gathering logs for dmesg ...
I0925 11:30:32.813501 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:30:32.827795 57927 logs.go:123] Gathering logs for etcd [398bd2a5d8a1] ...
I0925 11:30:32.827824 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 398bd2a5d8a1"
I0925 11:30:32.862910 57927 logs.go:123] Gathering logs for kube-scheduler [fb845c120fcf] ...
I0925 11:30:32.862934 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb845c120fcf"
I0925 11:30:32.899610 57927 logs.go:123] Gathering logs for kube-controller-manager [b75d214e650c] ...
I0925 11:30:32.899642 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75d214e650c"
I0925 11:30:32.941021 57927 logs.go:123] Gathering logs for describe nodes ...
I0925 11:30:32.941056 57927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:30:33.072749 57927 logs.go:123] Gathering logs for coredns [f04ac298d08a] ...
I0925 11:30:33.072786 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f04ac298d08a"
I0925 11:30:33.105984 57927 logs.go:123] Gathering logs for kube-proxy [30075b5efc6f] ...
I0925 11:30:33.106016 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30075b5efc6f"
I0925 11:30:33.132338 57927 logs.go:123] Gathering logs for storage-provisioner [b9d2c22b02cb] ...
I0925 11:30:33.132366 57927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9d2c22b02cb"
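
The run above shows minikube's log-gathering pattern: enumerate containers with "docker ps -a --filter=name=k8s_<component>" and tail each one with "docker logs --tail 400". Below is a minimal Go sketch of that same pattern, assuming it runs directly on the node (the real code issues these commands through an SSH runner); the component names and the 400-line tail come from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs mirrors the pattern in the log above: list container IDs
// matching the kubeadm naming convention (k8s_<component>_...), then tail each
// container's logs. Assumes it runs on the node itself; minikube does the same
// over an SSH runner.
func gatherComponentLogs(component string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("== %s [%s] ==\n%s\n", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kubernetes-dashboard", "storage-provisioner"} {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println("error:", err)
		}
	}
}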
I0925 11:30:30.629720 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:33.133383 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:30.758173 57426 out.go:204] - Configuring RBAC rules ...
I0925 11:30:30.758310 57426 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0925 11:30:30.758487 57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
I0925 11:30:30.758649 57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
I0925 11:30:30.758810 57426 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0925 11:30:30.758962 57426 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0925 11:30:30.759033 57426 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0925 11:30:30.759112 57426 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0925 11:30:30.759121 57426 kubeadm.go:322]
I0925 11:30:30.759191 57426 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0925 11:30:30.759205 57426 kubeadm.go:322]
I0925 11:30:30.759275 57426 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0925 11:30:30.759285 57426 kubeadm.go:322]
I0925 11:30:30.759329 57426 kubeadm.go:322] mkdir -p $HOME/.kube
I0925 11:30:30.759379 57426 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0925 11:30:30.759421 57426 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0925 11:30:30.759429 57426 kubeadm.go:322]
I0925 11:30:30.759483 57426 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0925 11:30:30.759595 57426 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0925 11:30:30.759689 57426 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0925 11:30:30.759705 57426 kubeadm.go:322]
I0925 11:30:30.759821 57426 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0925 11:30:30.759962 57426 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0925 11:30:30.759977 57426 kubeadm.go:322]
I0925 11:30:30.760084 57426 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
I0925 11:30:30.760216 57426 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 \
I0925 11:30:30.760255 57426 kubeadm.go:322] --control-plane
I0925 11:30:30.760264 57426 kubeadm.go:322]
I0925 11:30:30.760361 57426 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0925 11:30:30.760370 57426 kubeadm.go:322]
I0925 11:30:30.760469 57426 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token snq27o.n0f9uw50v17gbayd \
I0925 11:30:30.760617 57426 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54
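
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A sketch that recomputes the value, assuming the default kubeadm CA path /etc/kubernetes/pki/ca.crt; this is an illustration of the hash format, not minikube's code.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// kubeadm's --discovery-token-ca-cert-hash is "sha256:" followed by the
	// SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA cert.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // default kubeadm CA path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}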
I0925 11:30:30.760630 57426 cni.go:84] Creating CNI manager for ""
I0925 11:30:30.760655 57426 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0925 11:30:30.760693 57426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0925 11:30:30.760827 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:30.760899 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=old-k8s-version-694015 minikube.k8s.io/updated_at=2023_09_25T11_30_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:30.820984 57426 ops.go:34] apiserver oom_adj: -16
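
The check above runs cat /proc/$(pgrep kube-apiserver)/oom_adj; the reported -16 means the kernel's OOM killer strongly avoids the apiserver process. A rough Go equivalent of that shell pipeline (oom_adj is the legacy knob that the log reads; newer kernels also expose oom_score_adj):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0] // first match, as $(pgrep ...) would use
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}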
I0925 11:30:31.034555 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:31.165894 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:31.768765 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:32.269393 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:32.768687 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:33.269126 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:33.768794 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:34.269149 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:34.769469 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:35.268685 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:35.664427 57927 system_pods.go:59] 8 kube-system pods found
I0925 11:30:35.664451 57927 system_pods.go:61] "coredns-5dd5756b68-lp744" [67024c7b-a800-4c05-80f8-ad56b637d721] Running
I0925 11:30:35.664456 57927 system_pods.go:61] "etcd-default-k8s-diff-port-319133" [bc48a820-15fc-46c3-be99-4842fec268b5] Running
I0925 11:30:35.664461 57927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-319133" [04c9e550-fac9-4b14-a53f-f49a8d28f3aa] Running
I0925 11:30:35.664466 57927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-319133" [84d26a48-a3cb-480e-818a-04e47c47a04a] Running
I0925 11:30:35.664473 57927 system_pods.go:61] "kube-proxy-p4dnh" [8d162f05-34ef-431b-ac18-fc0ea1f48a5a] Running
I0925 11:30:35.664479 57927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-319133" [d66d0709-f0f0-482b-88fc-cbf209c895fd] Running
I0925 11:30:35.664489 57927 system_pods.go:61] "metrics-server-57f55c9bc5-wcdlv" [66045763-8356-4769-930f-a82fc160d7fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:30:35.664507 57927 system_pods.go:61] "storage-provisioner" [eaa8bad6-4a31-4429-98ff-099273d7184f] Running
I0925 11:30:35.664518 57927 system_pods.go:74] duration metric: took 3.612465435s to wait for pod list to return data ...
I0925 11:30:35.664526 57927 default_sa.go:34] waiting for default service account to be created ...
I0925 11:30:35.669238 57927 default_sa.go:45] found service account: "default"
I0925 11:30:35.669258 57927 default_sa.go:55] duration metric: took 4.728219ms for default service account to be created ...
I0925 11:30:35.669266 57927 system_pods.go:116] waiting for k8s-apps to be running ...
I0925 11:30:35.677224 57927 system_pods.go:86] 8 kube-system pods found
I0925 11:30:35.677248 57927 system_pods.go:89] "coredns-5dd5756b68-lp744" [67024c7b-a800-4c05-80f8-ad56b637d721] Running
I0925 11:30:35.677254 57927 system_pods.go:89] "etcd-default-k8s-diff-port-319133" [bc48a820-15fc-46c3-be99-4842fec268b5] Running
I0925 11:30:35.677260 57927 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-319133" [04c9e550-fac9-4b14-a53f-f49a8d28f3aa] Running
I0925 11:30:35.677265 57927 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-319133" [84d26a48-a3cb-480e-818a-04e47c47a04a] Running
I0925 11:30:35.677269 57927 system_pods.go:89] "kube-proxy-p4dnh" [8d162f05-34ef-431b-ac18-fc0ea1f48a5a] Running
I0925 11:30:35.677273 57927 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-319133" [d66d0709-f0f0-482b-88fc-cbf209c895fd] Running
I0925 11:30:35.677279 57927 system_pods.go:89] "metrics-server-57f55c9bc5-wcdlv" [66045763-8356-4769-930f-a82fc160d7fd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:30:35.677285 57927 system_pods.go:89] "storage-provisioner" [eaa8bad6-4a31-4429-98ff-099273d7184f] Running
I0925 11:30:35.677291 57927 system_pods.go:126] duration metric: took 8.021227ms to wait for k8s-apps to be running ...
I0925 11:30:35.677301 57927 system_svc.go:44] waiting for kubelet service to be running ...
I0925 11:30:35.677340 57927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0925 11:30:35.696637 57927 system_svc.go:56] duration metric: WaitForService took 19.327902ms to wait for kubelet.
I0925 11:30:35.696659 57927 kubeadm.go:581] duration metric: took 4m22.985262397s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0925 11:30:35.696712 57927 node_conditions.go:102] verifying NodePressure condition ...
I0925 11:30:35.701675 57927 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0925 11:30:35.701709 57927 node_conditions.go:123] node cpu capacity is 2
I0925 11:30:35.701719 57927 node_conditions.go:105] duration metric: took 4.999654ms to run NodePressure ...
I0925 11:30:35.701730 57927 start.go:228] waiting for startup goroutines ...
I0925 11:30:35.701737 57927 start.go:233] waiting for cluster config update ...
I0925 11:30:35.701749 57927 start.go:242] writing updated cluster config ...
I0925 11:30:35.702076 57927 ssh_runner.go:195] Run: rm -f paused
I0925 11:30:35.751111 57927 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
I0925 11:30:35.753033 57927 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-319133" cluster and "default" namespace by default
I0925 11:30:35.134183 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:37.629084 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:35.769384 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:36.269510 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:36.768848 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:37.268799 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:37.769259 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:38.269444 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:38.769081 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:39.269471 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:39.768795 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:40.269215 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:39.631655 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:42.128083 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:40.768992 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:41.269161 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:41.768782 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:42.269438 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:42.769149 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:43.268490 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:43.768911 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:44.269363 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:44.769428 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:45.268548 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:45.769489 57426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:30:46.046613 57426 kubeadm.go:1081] duration metric: took 15.285826285s to wait for elevateKubeSystemPrivileges.
I0925 11:30:46.046655 57426 kubeadm.go:406] StartCluster complete in 5m34.119546847s
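
The stretch of "kubectl get sa default" runs above is minikube polling, at roughly 500ms intervals, for the default service account to be created; the elevateKubeSystemPrivileges step that wraps this poll is timed at 15.28s in the line above. A sketch of that poll-until-ready loop; the function name and the two-minute timeout are hypothetical, the command and cadence mirror the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the ~500ms retry cadence visible in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account now exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}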
I0925 11:30:46.046676 57426 settings.go:142] acquiring lock: {Name:mk372f3d0f6e5777ebfc48341e146821f27f636c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 11:30:46.046764 57426 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17297-6032/kubeconfig
I0925 11:30:46.048206 57426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 11:30:46.048432 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0925 11:30:46.048574 57426 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0925 11:30:46.048644 57426 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-694015"
I0925 11:30:46.048653 57426 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-694015"
I0925 11:30:46.048678 57426 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-694015"
I0925 11:30:46.048687 57426 addons.go:69] Setting dashboard=true in profile "old-k8s-version-694015"
W0925 11:30:46.048690 57426 addons.go:240] addon storage-provisioner should already be in state true
I0925 11:30:46.048698 57426 addons.go:231] Setting addon dashboard=true in "old-k8s-version-694015"
W0925 11:30:46.048709 57426 addons.go:240] addon dashboard should already be in state true
I0925 11:30:46.048720 57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0925 11:30:46.048735 57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
I0925 11:30:46.048744 57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
I0925 11:30:46.048818 57426 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-694015"
I0925 11:30:46.048847 57426 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-694015"
W0925 11:30:46.048855 57426 addons.go:240] addon metrics-server should already be in state true
I0925 11:30:46.048680 57426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-694015"
I0925 11:30:46.048796 57426 cache.go:107] acquiring lock: {Name:mk67fca357e44d730577a3f111223198f60ef976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0925 11:30:46.048888 57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
I0925 11:30:46.048935 57426 cache.go:115] /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0925 11:30:46.048944 57426 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 153.391µs
I0925 11:30:46.048955 57426 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0925 11:30:46.048963 57426 cache.go:87] Successfully saved all images to host disk.
I0925 11:30:46.049135 57426 config.go:182] Loaded profile config "old-k8s-version-694015": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0925 11:30:46.049144 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.049162 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.049168 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.049183 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.049247 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.049260 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.049320 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.049333 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.049505 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.049555 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.072180 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
I0925 11:30:46.072238 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
I0925 11:30:46.072269 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
I0925 11:30:46.072356 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39891
I0925 11:30:46.072357 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37241
I0925 11:30:46.072696 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.072776 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.072860 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.073344 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.073364 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.073496 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.073509 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.073509 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.073756 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.073762 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.073964 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.074195 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.074210 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.074253 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.074286 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.074439 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.074467 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.074610 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.074656 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.074686 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.074715 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.074930 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.075069 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.075101 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.075234 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.075269 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.075582 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.075811 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.077659 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.077697 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.094611 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
I0925 11:30:46.097022 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
I0925 11:30:46.097145 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.097460 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.097748 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.097767 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.098172 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.098314 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.098333 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.098564 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.098618 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.099229 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.101256 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:30:46.103863 57426 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0925 11:30:46.102124 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:30:46.102436 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
I0925 11:30:46.106576 57426 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0925 11:30:46.105560 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.109500 57426 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0925 11:30:46.108220 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0925 11:30:46.108845 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.110913 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.110969 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0925 11:30:46.110985 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:30:46.110999 57426 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0925 11:30:46.111011 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0925 11:30:46.111024 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:30:46.112450 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.112637 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:30:46.112839 57426 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0925 11:30:46.112862 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:30:46.115509 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.115949 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:30:46.115983 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.116123 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:30:46.116214 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.116253 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:30:46.116342 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:30:46.116466 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:30:46.116484 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:30:46.116508 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.116774 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:30:46.116925 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:30:46.117104 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:30:46.117252 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:30:46.119073 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.119413 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:30:46.119430 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.119685 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:30:46.119854 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:30:46.120011 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:30:46.120148 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:30:46.127174 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
I0925 11:30:46.127843 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.128399 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.128428 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.128967 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.129155 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.129945 57426 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-694015" context rescaled to 1 replicas
I0925 11:30:46.129977 57426 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.17 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0925 11:30:46.131741 57426 out.go:177] * Verifying Kubernetes components...
I0925 11:30:46.133087 57426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0925 11:30:46.130848 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:30:46.134728 57426 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0925 11:30:44.129372 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:46.133247 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:48.630362 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:46.136080 57426 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0925 11:30:46.136097 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0925 11:30:46.136115 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:30:46.139231 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.139692 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:30:46.139718 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.139957 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:30:46.140113 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:30:46.140252 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:30:46.140377 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:30:46.147885 57426 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-694015"
W0925 11:30:46.147907 57426 addons.go:240] addon default-storageclass should already be in state true
I0925 11:30:46.147934 57426 host.go:66] Checking if "old-k8s-version-694015" exists ...
I0925 11:30:46.148356 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.148384 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.173474 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
I0925 11:30:46.174243 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.174879 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.174900 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.176033 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.176694 57426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:30:46.176736 57426 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:30:46.196631 57426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43971
I0925 11:30:46.197107 57426 main.go:141] libmachine: () Calling .GetVersion
I0925 11:30:46.197645 57426 main.go:141] libmachine: Using API Version 1
I0925 11:30:46.197665 57426 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:30:46.198067 57426 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:30:46.198270 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetState
I0925 11:30:46.200093 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .DriverName
I0925 11:30:46.200354 57426 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0925 11:30:46.200371 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0925 11:30:46.200390 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHHostname
I0925 11:30:46.203486 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.203884 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:28:7c", ip: ""} in network mk-old-k8s-version-694015: {Iface:virbr2 ExpiryTime:2023-09-25 12:24:54 +0000 UTC Type:0 Mac:52:54:00:e6:28:7c Iaid: IPaddr:192.168.50.17 Prefix:24 Hostname:old-k8s-version-694015 Clientid:01:52:54:00:e6:28:7c}
I0925 11:30:46.203998 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | domain old-k8s-version-694015 has defined IP address 192.168.50.17 and MAC address 52:54:00:e6:28:7c in network mk-old-k8s-version-694015
I0925 11:30:46.204172 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHPort
I0925 11:30:46.204342 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHKeyPath
I0925 11:30:46.204489 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .GetSSHUsername
I0925 11:30:46.204636 57426 sshutil.go:53] new ssh client: &{IP:192.168.50.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/old-k8s-version-694015/id_rsa Username:docker}
I0925 11:30:46.413931 57426 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-694015" to be "Ready" ...
I0925 11:30:46.414008 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.50.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0925 11:30:46.416569 57426 node_ready.go:49] node "old-k8s-version-694015" has status "Ready":"True"
I0925 11:30:46.416586 57426 node_ready.go:38] duration metric: took 2.626333ms waiting for node "old-k8s-version-694015" to be "Ready" ...
I0925 11:30:46.416594 57426 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 11:30:46.420795 57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
I0925 11:30:46.484507 57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0925 11:30:46.484532 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0925 11:30:46.532417 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0925 11:30:46.532443 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0925 11:30:46.575299 57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0925 11:30:46.575317 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0925 11:30:46.595994 57426 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0925 11:30:46.596018 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0925 11:30:46.652448 57426 docker.go:664] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
registry.k8s.io/pause:3.1
k8s.gcr.io/pause:3.1
-- /stdout --
I0925 11:30:46.652473 57426 cache_images.go:84] Images are preloaded, skipping loading
I0925 11:30:46.652480 57426 cache_images.go:262] succeeded pushing to: old-k8s-version-694015
I0925 11:30:46.652483 57426 cache_images.go:263] failed pushing to:
I0925 11:30:46.652504 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:46.652518 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:46.652957 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:46.652963 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:46.652991 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:46.653007 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:46.653020 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:46.653288 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:46.653304 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:46.705521 57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0925 11:30:46.707099 57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0925 11:30:46.712115 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0925 11:30:46.712134 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0925 11:30:46.762833 57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0925 11:30:46.851711 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0925 11:30:46.851753 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0925 11:30:47.115165 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0925 11:30:47.115193 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0925 11:30:47.386363 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
I0925 11:30:47.386386 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0925 11:30:47.610468 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0925 11:30:47.610490 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0925 11:30:47.697559 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0925 11:30:47.697578 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0925 11:30:47.864150 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0925 11:30:47.864169 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0925 11:30:47.915917 57426 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0925 11:30:47.915945 57426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0925 11:30:48.000793 57426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.50.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.586742998s)
I0925 11:30:48.000836 57426 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
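
The sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.50.1 here), by inserting a hosts block ahead of the Corefile's forward directive. A sketch of the same edit done on a Corefile string in Go; the sample Corefile is illustrative.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord mimics the sed pipeline above: insert a CoreDNS hosts{}
// block just before the "forward . /etc/resolv.conf" line so that
// host.minikube.internal resolves to the host-side gateway IP.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"    hosts {\n        %s host.minikube.internal\n        fallthrough\n    }\n",
		hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock) // insert ahead of the forward directive
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := `.:53 {
    errors
    forward . /etc/resolv.conf
    cache 30
}
`
	fmt.Print(injectHostRecord(corefile, "192.168.50.1"))
}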
I0925 11:30:48.085411 57426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0925 11:30:48.190617 57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.485051258s)
I0925 11:30:48.190677 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.190691 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.191035 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.191056 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.191068 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.191078 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.192850 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.192853 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.192876 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.192885 57426 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-694015"
I0925 11:30:48.465209 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:48.575177 57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.868034342s)
I0925 11:30:48.575232 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.575246 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.575181 57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.812311763s)
I0925 11:30:48.575317 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.575328 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.575540 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.575560 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.575570 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.575579 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.575635 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.575742 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.575772 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.575781 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.575789 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.575797 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.575878 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.575903 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.575911 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.577345 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.577384 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.577406 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:48.577435 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:48.577451 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:48.577940 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:48.577944 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:48.577964 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:49.298546 57426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.21307781s)
I0925 11:30:49.298606 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:49.298628 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:49.302266 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:49.302272 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:49.302307 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:49.302321 57426 main.go:141] libmachine: Making call to close driver server
I0925 11:30:49.302331 57426 main.go:141] libmachine: (old-k8s-version-694015) Calling .Close
I0925 11:30:49.302655 57426 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:30:49.302695 57426 main.go:141] libmachine: (old-k8s-version-694015) DBG | Closing plugin on server side
I0925 11:30:49.302717 57426 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:30:49.304441 57426 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-694015 addons enable metrics-server
I0925 11:30:49.306061 57426 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
I0925 11:30:49.307539 57426 addons.go:502] enable addons completed in 3.258962527s: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
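
The addon install that just finished follows a two-step pattern: each manifest is written to /etc/kubernetes/addons (the "scp memory -->" lines), then a single kubectl apply names them all with repeated -f flags. A hedged sketch of that pattern; applyAddonManifests and the sample manifest bytes are hypothetical, the directory, kubeconfig path, and apply shape come from the log.

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

// applyAddonManifests writes each manifest to the addons directory, then
// applies them all in one kubectl invocation, as the log's apply command does.
func applyAddonManifests(dir, kubeconfig string, manifests map[string][]byte) error {
	args := []string{"apply"}
	for name, data := range manifests {
		path := filepath.Join(dir, name)
		if err := os.WriteFile(path, data, 0644); err != nil {
			return err
		}
		args = append(args, "-f", path)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	return cmd.Run()
}

func main() {
	manifests := map[string][]byte{
		"storageclass.yaml": []byte("# manifest bytes would be embedded here\n"),
	}
	if err := applyAddonManifests("/etc/kubernetes/addons",
		"/var/lib/minikube/kubeconfig", manifests); err != nil {
		panic(err)
	}
}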
I0925 11:30:50.630959 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:53.128983 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:50.940378 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:53.436796 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:55.437380 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:55.131064 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:57.628873 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:57.449840 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:59.938237 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:30:59.629445 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:02.129311 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:02.438436 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:04.937614 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:04.627904 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:06.629258 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:08.629473 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:06.937878 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:09.437807 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:11.128681 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:13.129731 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:11.939073 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:14.437620 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:15.628774 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:17.630838 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:16.938666 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:19.437732 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:20.139603 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:22.629587 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:21.938151 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:23.938328 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:25.130178 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:27.628803 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:26.439526 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:28.937508 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:29.631037 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:32.128151 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:30.943648 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:33.437428 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:35.438086 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:34.129227 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:36.129294 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:38.629985 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:37.439039 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:39.442448 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:41.129913 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:43.631099 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:41.937237 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:43.939282 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:46.128833 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:48.628446 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:46.438561 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:48.938598 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:50.629674 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:53.129010 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:50.938694 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:52.939141 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:55.438245 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:55.629903 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:58.128851 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:31:57.937434 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:00.437596 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:00.129216 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:02.629241 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:02.437909 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:04.438109 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:04.629284 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:07.128455 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:06.438145 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:08.938681 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:09.129543 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:11.629259 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:11.438436 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:13.438614 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:14.130657 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:16.629579 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:15.938889 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:18.438798 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:19.129812 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:21.630003 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:20.937670 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:22.938056 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:24.938180 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:24.128380 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:26.129010 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:28.630164 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:26.938537 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:28.938993 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:31.127679 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:33.128750 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:30.939782 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:33.438287 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:35.438564 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:35.128786 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:37.129289 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:37.938062 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:40.438394 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:39.129627 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:41.131250 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:43.629234 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:42.439143 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:44.938221 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:45.630527 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:48.128292 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:46.940247 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:48.940644 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:50.128630 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:52.129574 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:51.437686 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:53.438013 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:55.438473 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:54.629843 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:57.128814 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:57.939231 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:00.438636 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:32:59.633169 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:02.129926 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:02.937519 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:04.937631 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:04.629189 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:06.629835 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:08.629868 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:07.436605 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:09.437297 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:11.128030 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:13.128211 59899 pod_ready.go:102] pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:11.438337 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:13.939288 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
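The interleaved entries above come from two concurrent runs (PIDs 57426 and 59899), each re-polling one pod's Ready condition every couple of seconds until a deadline. A minimal sketch of this style of wait in client-go terms (illustrative only; minikube's own loop in pod_ready.go differs in detail, and the package and function names here are invented for the example):

// podready_sketch.go: a rough client-go version of the poll driving the
// pod_ready.go:102 lines above. Illustrative, not minikube's actual code.
package podreadysketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady re-checks the pod's Ready condition until it turns True or
// ctx expires (the runs above use 4m/6m deadlines).
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextCancel(ctx, 2*time.Second, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n",
						name, ns, cond.Status)
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // no Ready condition reported yet
		})
}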
I0925 11:33:14.611278 59899 pod_ready.go:81] duration metric: took 4m0.000327599s waiting for pod "metrics-server-57f55c9bc5-xcns4" in "kube-system" namespace to be "Ready" ...
E0925 11:33:14.611332 59899 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0925 11:33:14.611349 59899 pod_ready.go:38] duration metric: took 4m12.007655968s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 11:33:14.611376 59899 kubeadm.go:640] restartCluster took 4m31.218254898s
W0925 11:33:14.611443 59899 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
I0925 11:33:14.611477 59899 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
I0925 11:33:15.940496 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:18.440278 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:23.826236 59899 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (9.214737742s)
I0925 11:33:23.826300 59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0925 11:33:23.840564 59899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0925 11:33:23.850760 59899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0925 11:33:23.860161 59899 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0925 11:33:23.860203 59899 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
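Having missed the restart deadline, this run now rebuilds the control plane from scratch: kubeadm reset tears down the old state (which is why the ls check above found none of the kubeconfig files), then kubeadm init recreates it from the generated config. Stripped of the ssh_runner wrapping and line-wrapped for readability, the two commands are:

sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" \
  kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force

sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,\
DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,\
FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,\
FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,\
FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,\
FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem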
I0925 11:33:20.938819 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:22.939228 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:24.940142 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:24.111104 59899 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0925 11:33:27.440968 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:29.937681 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:33.957801 59899 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
I0925 11:33:33.957861 59899 kubeadm.go:322] [preflight] Running pre-flight checks
I0925 11:33:33.957964 59899 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0925 11:33:33.958127 59899 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0925 11:33:33.958257 59899 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0925 11:33:33.958352 59899 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0925 11:33:33.961247 59899 out.go:204] - Generating certificates and keys ...
I0925 11:33:33.961330 59899 kubeadm.go:322] [certs] Using existing ca certificate authority
I0925 11:33:33.961381 59899 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0925 11:33:33.961482 59899 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0925 11:33:33.961584 59899 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0925 11:33:33.961691 59899 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0925 11:33:33.961764 59899 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0925 11:33:33.961860 59899 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0925 11:33:33.961946 59899 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0925 11:33:33.962038 59899 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0925 11:33:33.962141 59899 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0925 11:33:33.962189 59899 kubeadm.go:322] [certs] Using the existing "sa" key
I0925 11:33:33.962274 59899 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0925 11:33:33.962342 59899 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0925 11:33:33.962404 59899 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0925 11:33:33.962484 59899 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0925 11:33:33.962596 59899 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0925 11:33:33.962722 59899 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0925 11:33:33.962812 59899 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0925 11:33:33.964227 59899 out.go:204] - Booting up control plane ...
I0925 11:33:33.964334 59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0925 11:33:33.964411 59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0925 11:33:33.964484 59899 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0925 11:33:33.964622 59899 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0925 11:33:33.964767 59899 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0925 11:33:33.964843 59899 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0925 11:33:33.964974 59899 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0925 11:33:33.965033 59899 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.004093 seconds
I0925 11:33:33.965122 59899 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0925 11:33:33.965219 59899 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0925 11:33:33.965300 59899 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0925 11:33:33.965551 59899 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-094323 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0925 11:33:33.965631 59899 kubeadm.go:322] [bootstrap-token] Using token: jxl01o.6st4cg36x4e3zwsq
I0925 11:33:33.968152 59899 out.go:204] - Configuring RBAC rules ...
I0925 11:33:33.968255 59899 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0925 11:33:33.968324 59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0925 11:33:33.968463 59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0925 11:33:33.968579 59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
I0925 11:33:33.968719 59899 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0925 11:33:33.968841 59899 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0925 11:33:33.968990 59899 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0925 11:33:33.969057 59899 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0925 11:33:33.969115 59899 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0925 11:33:33.969125 59899 kubeadm.go:322]
I0925 11:33:33.969197 59899 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0925 11:33:33.969206 59899 kubeadm.go:322]
I0925 11:33:33.969302 59899 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0925 11:33:33.969309 59899 kubeadm.go:322]
I0925 11:33:33.969339 59899 kubeadm.go:322] mkdir -p $HOME/.kube
I0925 11:33:33.969409 59899 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0925 11:33:33.969481 59899 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0925 11:33:33.969494 59899 kubeadm.go:322]
I0925 11:33:33.969577 59899 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0925 11:33:33.969592 59899 kubeadm.go:322]
I0925 11:33:33.969652 59899 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0925 11:33:33.969661 59899 kubeadm.go:322]
I0925 11:33:33.969721 59899 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0925 11:33:33.969820 59899 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0925 11:33:33.969931 59899 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0925 11:33:33.969945 59899 kubeadm.go:322]
I0925 11:33:33.970020 59899 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0925 11:33:33.970079 59899 kubeadm.go:322] and service account keys to each node and then running the following as root:
I0925 11:33:33.970085 59899 kubeadm.go:322]
I0925 11:33:33.970149 59899 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jxl01o.6st4cg36x4e3zwsq \
I0925 11:33:33.970246 59899 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54 \
I0925 11:33:33.970273 59899 kubeadm.go:322] --control-plane
I0925 11:33:33.970286 59899 kubeadm.go:322]
I0925 11:33:33.970379 59899 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0925 11:33:33.970391 59899 kubeadm.go:322]
I0925 11:33:33.970473 59899 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jxl01o.6st4cg36x4e3zwsq \
I0925 11:33:33.970561 59899 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:455a6e1c2932646abd648adc4fff0ce596b942d8b3bd098a2ef2cb3ea084ab54
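The --discovery-token-ca-cert-hash printed above is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which lets a joining node pin the CA it discovers over the insecure bootstrap channel. A small self-contained Go sketch that recomputes it (the certs path comes from the certificateDir logged earlier; the package and function names are illustrative):

// cahash_sketch.go: recompute the kubeadm discovery token CA cert hash.
package cahashsketch

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns "sha256:<hex>" over the CA cert's Subject Public Key Info,
// the same value kubeadm prints for --discovery-token-ca-cert-hash.
func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath) // e.g. /var/lib/minikube/certs/ca.crt
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}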
I0925 11:33:33.970570 59899 cni.go:84] Creating CNI manager for ""
I0925 11:33:33.970583 59899 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0925 11:33:33.973276 59899 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0925 11:33:33.974771 59899 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0925 11:33:33.991169 59899 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
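The 457-byte conflist itself is not reproduced in the log. For orientation, a bridge-plus-portmap conflist of the kind dropped into /etc/cni/net.d generally has the following shape (illustrative values, not the exact file minikube generated; the subnet and flags vary with configuration):

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}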
I0925 11:33:34.014483 59899 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0925 11:33:34.014576 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:34.014605 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=embed-certs-094323 minikube.k8s.io/updated_at=2023_09_25T11_33_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:31.938903 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:34.438342 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:34.061656 59899 ops.go:34] apiserver oom_adj: -16
I0925 11:33:34.486947 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:34.586316 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:35.181870 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:35.682572 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:36.182427 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:36.682439 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:37.182278 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:37.682264 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:38.181892 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:38.681964 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:36.938434 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:39.437659 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:39.181618 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:39.682052 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:40.181879 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:40.682579 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:41.182334 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:41.682270 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:42.181757 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:42.682314 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:43.181975 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:43.682310 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:41.438288 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:43.937112 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:44.182254 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:44.682566 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:45.181651 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:45.681891 59899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0925 11:33:45.783591 59899 kubeadm.go:1081] duration metric: took 11.769084129s to wait for elevateKubeSystemPrivileges.
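The 11.77s above is the 500ms-cadence `kubectl get sa default` loop just shown: the run waits until the default ServiceAccount exists in the fresh cluster (the service account controller creates it asynchronously) before depending on the kube-system:default binding created earlier. Roughly, in Go (illustrative names and structure; minikube's real loop differs):

// sasketch.go: poll until `kubectl get sa default` succeeds, mirroring the
// repeated ssh_runner lines above. Illustrative only.
package sasketch

import (
	"context"
	"os/exec"
	"time"
)

// waitDefaultSA retries every 500ms until the default ServiceAccount exists
// or ctx is cancelled.
func waitDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
	t := time.NewTicker(500 * time.Millisecond)
	defer t.Stop()
	for {
		cmd := exec.CommandContext(ctx, "sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // default SA exists; RBAC bindings to it will resolve
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-t.C:
		}
	}
}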
I0925 11:33:45.783631 59899 kubeadm.go:406] StartCluster complete in 5m2.419220731s
I0925 11:33:45.783654 59899 settings.go:142] acquiring lock: {Name:mk372f3d0f6e5777ebfc48341e146821f27f636c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 11:33:45.783749 59899 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/17297-6032/kubeconfig
I0925 11:33:45.785139 59899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-6032/kubeconfig: {Name:mk2e6cdf75b548522ce59dabb15b91a1d0336907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0925 11:33:45.785373 59899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0925 11:33:45.785497 59899 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0925 11:33:45.785578 59899 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-094323"
I0925 11:33:45.785591 59899 addons.go:69] Setting default-storageclass=true in profile "embed-certs-094323"
I0925 11:33:45.785600 59899 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-094323"
W0925 11:33:45.785608 59899 addons.go:240] addon storage-provisioner should already be in state true
I0925 11:33:45.785610 59899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-094323"
I0925 11:33:45.785613 59899 addons.go:69] Setting metrics-server=true in profile "embed-certs-094323"
I0925 11:33:45.785629 59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 11:33:45.785624 59899 addons.go:69] Setting dashboard=true in profile "embed-certs-094323"
I0925 11:33:45.785641 59899 addons.go:231] Setting addon metrics-server=true in "embed-certs-094323"
I0925 11:33:45.785649 59899 host.go:66] Checking if "embed-certs-094323" exists ...
W0925 11:33:45.785652 59899 addons.go:240] addon metrics-server should already be in state true
I0925 11:33:45.785661 59899 addons.go:231] Setting addon dashboard=true in "embed-certs-094323"
W0925 11:33:45.785671 59899 addons.go:240] addon dashboard should already be in state true
I0925 11:33:45.785702 59899 host.go:66] Checking if "embed-certs-094323" exists ...
I0925 11:33:45.785726 59899 host.go:66] Checking if "embed-certs-094323" exists ...
I0925 11:33:45.785720 59899 cache.go:107] acquiring lock: {Name:mk67fca357e44d730577a3f111223198f60ef976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0925 11:33:45.785794 59899 cache.go:115] /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
I0925 11:33:45.785804 59899 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 97.126µs
I0925 11:33:45.785813 59899 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17297-6032/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
I0925 11:33:45.785842 59899 cache.go:87] Successfully saved all images to host disk.
I0925 11:33:45.786040 59899 config.go:182] Loaded profile config "embed-certs-094323": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 11:33:45.786074 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:33:45.786077 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:33:45.786103 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:33:45.786119 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:33:45.786100 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:33:45.786148 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:33:45.786175 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:33:45.786226 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:33:45.786382 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:33:45.786458 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:33:45.804658 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
I0925 11:33:45.804729 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
I0925 11:33:45.804829 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
I0925 11:33:45.805237 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:33:45.805268 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:33:45.805835 59899 main.go:141] libmachine: Using API Version 1
I0925 11:33:45.805855 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:33:45.806126 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
I0925 11:33:45.806245 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:33:45.806461 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
I0925 11:33:45.806533 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:33:45.806584 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:33:45.806593 59899 main.go:141] libmachine: Using API Version 1
I0925 11:33:45.806608 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:33:45.806726 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41161
I0925 11:33:45.806958 59899 main.go:141] libmachine: Using API Version 1
I0925 11:33:45.806973 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:33:45.807052 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:33:45.807117 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:33:45.807146 59899 main.go:141] libmachine: Using API Version 1
I0925 11:33:45.807158 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:33:45.807335 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:33:45.807550 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
I0925 11:33:45.807552 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:33:45.807628 59899 main.go:141] libmachine: Using API Version 1
I0925 11:33:45.807655 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:33:45.807678 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:33:45.807709 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:33:45.808075 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:33:45.808113 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:33:45.808146 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:33:45.808643 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:33:45.808695 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:33:45.809669 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:33:45.809713 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:33:45.815794 59899 addons.go:231] Setting addon default-storageclass=true in "embed-certs-094323"
W0925 11:33:45.815817 59899 addons.go:240] addon default-storageclass should already be in state true
I0925 11:33:45.815845 59899 host.go:66] Checking if "embed-certs-094323" exists ...
I0925 11:33:45.816191 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:33:45.816218 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:33:45.818468 59899 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-094323" context rescaled to 1 replicas
I0925 11:33:45.818498 59899 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0925 11:33:45.820484 59899 out.go:177] * Verifying Kubernetes components...
I0925 11:33:45.821970 59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0925 11:33:45.827608 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43233
I0925 11:33:45.827764 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41093
I0925 11:33:45.828140 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:33:45.828192 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:33:45.828742 59899 main.go:141] libmachine: Using API Version 1
I0925 11:33:45.828756 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:33:45.828865 59899 main.go:141] libmachine: Using API Version 1
I0925 11:33:45.828875 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:33:45.829243 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:33:45.829291 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:33:45.829499 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
I0925 11:33:45.829508 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
I0925 11:33:45.829541 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
I0925 11:33:45.830368 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:33:45.830816 59899 main.go:141] libmachine: Using API Version 1
I0925 11:33:45.830834 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:33:45.830898 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
I0925 11:33:45.831336 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:33:45.831343 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:33:45.831544 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:33:45.831741 59899 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0925 11:33:45.831767 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:33:45.831896 59899 main.go:141] libmachine: Using API Version 1
I0925 11:33:45.831910 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:33:45.831962 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:33:45.832006 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:33:45.834683 59899 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0925 11:33:45.833215 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:33:45.835296 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:33:45.836115 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:33:45.836132 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:33:45.836140 59899 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0925 11:33:45.835941 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:33:45.837552 59899 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0925 11:33:45.837565 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0925 11:33:45.837580 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:33:45.836081 59899 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0925 11:33:45.837626 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0925 11:33:45.837640 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:33:45.836328 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
I0925 11:33:45.837722 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:33:45.838263 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:33:45.838449 59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
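Each "scp memory --> path (N bytes)" line is an in-memory asset streamed to the VM over the SSH client set up above, not a file copied from the host's disk. A minimal sketch of that kind of transfer with golang.org/x/crypto/ssh (host, user, and paths are placeholders; minikube's real implementation lives in ssh_runner.go and sshutil.go):

// scpmem_sketch.go: stream an in-memory byte slice to a file on the VM.
package scpmemsketch

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyToVM writes data to dst on the remote machine by piping it into
// `sudo tee` over an SSH session.
func copyToVM(addr, keyPath string, data []byte, dst string) error {
	key, err := os.ReadFile(keyPath) // e.g. .../machines/<name>/id_rsa
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for test VMs only
	})
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data) // the "memory" side of "scp memory"
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}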
I0925 11:33:45.840153 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:33:45.841675 59899 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0925 11:33:45.843211 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44107
I0925 11:33:45.841916 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:33:45.842082 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:33:45.842734 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:33:45.842915 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:33:45.843565 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:33:45.844615 59899 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0925 11:33:45.845951 59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0925 11:33:45.845966 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0925 11:33:45.845980 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:33:45.844700 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:33:45.844729 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:33:45.846027 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:33:45.844863 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:33:45.846043 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:33:45.844886 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:33:45.845165 59899 main.go:141] libmachine: Using API Version 1
I0925 11:33:45.846085 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:33:45.846265 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:33:45.846317 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:33:45.846412 59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
I0925 11:33:45.846432 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:33:45.847139 59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
I0925 11:33:45.847153 59899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0925 11:33:45.847192 59899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0925 11:33:45.848989 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:33:45.849283 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:33:45.849314 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:33:45.849456 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:33:45.849635 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:33:45.849777 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:33:45.849913 59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
I0925 11:33:45.862447 59899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40913
I0925 11:33:45.862828 59899 main.go:141] libmachine: () Calling .GetVersion
I0925 11:33:45.863295 59899 main.go:141] libmachine: Using API Version 1
I0925 11:33:45.863325 59899 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 11:33:45.863706 59899 main.go:141] libmachine: () Calling .GetMachineName
I0925 11:33:45.863888 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetState
I0925 11:33:45.865511 59899 main.go:141] libmachine: (embed-certs-094323) Calling .DriverName
I0925 11:33:45.865802 59899 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
I0925 11:33:45.865821 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0925 11:33:45.865840 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHHostname
I0925 11:33:45.868353 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:33:45.868774 59899 main.go:141] libmachine: (embed-certs-094323) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:77:47", ip: ""} in network mk-embed-certs-094323: {Iface:virbr1 ExpiryTime:2023-09-25 12:26:57 +0000 UTC Type:0 Mac:52:54:00:07:77:47 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:embed-certs-094323 Clientid:01:52:54:00:07:77:47}
I0925 11:33:45.868808 59899 main.go:141] libmachine: (embed-certs-094323) DBG | domain embed-certs-094323 has defined IP address 192.168.39.111 and MAC address 52:54:00:07:77:47 in network mk-embed-certs-094323
I0925 11:33:45.868936 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHPort
I0925 11:33:45.869132 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHKeyPath
I0925 11:33:45.869260 59899 main.go:141] libmachine: (embed-certs-094323) Calling .GetSSHUsername
I0925 11:33:45.869371 59899 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17297-6032/.minikube/machines/embed-certs-094323/id_rsa Username:docker}
I0925 11:33:46.090766 59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0925 11:33:46.090794 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0925 11:33:46.148251 59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0925 11:33:46.244486 59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0925 11:33:46.246747 59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0925 11:33:46.246767 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0925 11:33:46.285706 59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0925 11:33:46.285733 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0925 11:33:46.399367 59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0925 11:33:46.399389 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0925 11:33:46.454580 59899 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0925 11:33:46.454598 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0925 11:33:46.478692 59899 node_ready.go:35] waiting up to 6m0s for node "embed-certs-094323" to be "Ready" ...
I0925 11:33:46.478749 59899 docker.go:664] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/gvisor-addon:2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0925 11:33:46.478754 59899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
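The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block (mapping host.minikube.internal to the host-side gateway 192.168.39.1) ahead of the forward directive, and a log directive ahead of errors. Reconstructed from those sed expressions, the affected part of the Corefile afterwards reads (other stock directives such as health, ready, kubernetes, and cache omitted):

.:53 {
    log
    errors
    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
}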
I0925 11:33:46.478763 59899 cache_images.go:84] Images are preloaded, skipping loading
I0925 11:33:46.478772 59899 cache_images.go:262] succeeded pushing to: embed-certs-094323
I0925 11:33:46.478777 59899 cache_images.go:263] failed pushing to:
I0925 11:33:46.478797 59899 main.go:141] libmachine: Making call to close driver server
I0925 11:33:46.478821 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
I0925 11:33:46.479120 59899 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:33:46.479177 59899 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:33:46.479190 59899 main.go:141] libmachine: Making call to close driver server
I0925 11:33:46.479200 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
I0925 11:33:46.479138 59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
I0925 11:33:46.479613 59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
I0925 11:33:46.479623 59899 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:33:46.479632 59899 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:33:46.495731 59899 node_ready.go:49] node "embed-certs-094323" has status "Ready":"True"
I0925 11:33:46.495756 59899 node_ready.go:38] duration metric: took 17.032177ms waiting for node "embed-certs-094323" to be "Ready" ...
I0925 11:33:46.495768 59899 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 11:33:46.502666 59899 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace to be "Ready" ...
I0925 11:33:46.590707 59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0925 11:33:46.590728 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0925 11:33:46.646116 59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0925 11:33:46.836729 59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0925 11:33:46.836758 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0925 11:33:47.081956 59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
I0925 11:33:47.081978 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0925 11:33:47.372971 59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0925 11:33:47.372999 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0925 11:33:47.548990 59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0925 11:33:47.549016 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0925 11:33:47.759403 59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0925 11:33:47.759425 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0925 11:33:48.094571 59899 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0925 11:33:48.094601 59899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0925 11:33:48.300509 59899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
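
Each dashboard manifest is first scp'd into /etc/kubernetes/addons on the node (the addons.go lines above), then everything is applied in one kubectl invocation with repeated -f flags, as the command above shows. A sketch of assembling that invocation in Go; the file list is copied from the log, the sudo/KUBECONFIG wrapper is omitted, and the command is printed rather than run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The dashboard manifests staged under /etc/kubernetes/addons in the log.
	files := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
		"dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	// Build one `kubectl apply` with repeated -f flags, the same shape as
	// the ssh_runner command above.
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", "/etc/kubernetes/addons/"+f)
	}
	cmd := exec.Command("kubectl", args...)
	fmt.Println(cmd.String()) // print for illustration instead of executing
}
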
I0925 11:33:48.523994 59899 pod_ready.go:102] pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:49.536334 59899 pod_ready.go:92] pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace has status "Ready":"True"
I0925 11:33:49.536354 59899 pod_ready.go:81] duration metric: took 3.03366041s waiting for pod "coredns-5dd5756b68-56lj4" in "kube-system" namespace to be "Ready" ...
I0925 11:33:49.536365 59899 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace to be "Ready" ...
I0925 11:33:49.539583 59899 pod_ready.go:97] error getting pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pbwqs" not found
I0925 11:33:49.539613 59899 pod_ready.go:81] duration metric: took 3.241249ms waiting for pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace to be "Ready" ...
E0925 11:33:49.539624 59899 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-pbwqs" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pbwqs" not found
I0925 11:33:49.539633 59899 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:33:49.549714 59899 pod_ready.go:92] pod "etcd-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
I0925 11:33:49.549731 59899 pod_ready.go:81] duration metric: took 10.090379ms waiting for pod "etcd-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:33:49.549742 59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:33:49.554903 59899 pod_ready.go:92] pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
I0925 11:33:49.554917 59899 pod_ready.go:81] duration metric: took 5.167429ms waiting for pod "kube-apiserver-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:33:49.554927 59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:33:49.564229 59899 pod_ready.go:92] pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
I0925 11:33:49.564249 59899 pod_ready.go:81] duration metric: took 9.314363ms waiting for pod "kube-controller-manager-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:33:49.564261 59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pjwm2" in "kube-system" namespace to be "Ready" ...
I0925 11:33:49.568126 59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.41983793s)
I0925 11:33:49.568187 59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.323661752s)
I0925 11:33:49.568232 59899 main.go:141] libmachine: Making call to close driver server
I0925 11:33:49.568239 59899 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.089462417s)
I0925 11:33:49.568251 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
I0925 11:33:49.568256 59899 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0925 11:33:49.568301 59899 main.go:141] libmachine: Making call to close driver server
I0925 11:33:49.568319 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
I0925 11:33:49.568360 59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.922215522s)
I0925 11:33:49.568392 59899 main.go:141] libmachine: Making call to close driver server
I0925 11:33:49.568407 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
I0925 11:33:49.568608 59899 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:33:49.568626 59899 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:33:49.568637 59899 main.go:141] libmachine: Making call to close driver server
I0925 11:33:49.568643 59899 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:33:49.568674 59899 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:33:49.568685 59899 main.go:141] libmachine: Making call to close driver server
I0925 11:33:49.568689 59899 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:33:49.568695 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
I0925 11:33:49.568697 59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
I0925 11:33:49.568704 59899 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:33:49.568646 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
I0925 11:33:49.568716 59899 main.go:141] libmachine: Making call to close driver server
I0925 11:33:49.568725 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
I0925 11:33:49.568613 59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
I0925 11:33:49.568959 59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
I0925 11:33:49.568977 59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
I0925 11:33:49.569003 59899 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:33:49.569015 59899 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:33:49.569016 59899 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:33:49.569024 59899 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:33:49.569031 59899 main.go:141] libmachine: Making call to close driver server
I0925 11:33:49.569036 59899 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:33:49.569045 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
I0925 11:33:49.569048 59899 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:33:49.569033 59899 addons.go:467] Verifying addon metrics-server=true in "embed-certs-094323"
I0925 11:33:49.569276 59899 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:33:49.569292 59899 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:33:49.883443 59899 pod_ready.go:92] pod "kube-proxy-pjwm2" in "kube-system" namespace has status "Ready":"True"
I0925 11:33:49.883465 59899 pod_ready.go:81] duration metric: took 319.196098ms waiting for pod "kube-proxy-pjwm2" in "kube-system" namespace to be "Ready" ...
I0925 11:33:49.883477 59899 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:33:50.292288 59899 pod_ready.go:92] pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace has status "Ready":"True"
I0925 11:33:50.292314 59899 pod_ready.go:81] duration metric: took 408.829404ms waiting for pod "kube-scheduler-embed-certs-094323" in "kube-system" namespace to be "Ready" ...
I0925 11:33:50.292325 59899 pod_ready.go:38] duration metric: took 3.79654573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
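
The pod_ready lines above poll each system-critical pod until its Ready condition reports True, capped at 6m0s per pod. A simplified Go sketch of such a wait, shelling out to kubectl rather than using client-go as minikube does; the 2s poll interval is an assumption, not minikube's actual tuning:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls the pod's Ready condition, in the spirit of the
// pod_ready.go waits above, until it reads "True" or the deadline passes.
func waitPodReady(ns, name string) error {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed interval; errors simply retry
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	if err := waitPodReady("kube-system", "coredns-5dd5756b68-56lj4"); err != nil {
		fmt.Println(err)
	}
}
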
I0925 11:33:50.292349 59899 api_server.go:52] waiting for apiserver process to appear ...
I0925 11:33:50.292413 59899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:33:50.390976 59899 api_server.go:72] duration metric: took 4.572446849s to wait for apiserver process to appear ...
I0925 11:33:50.390998 59899 api_server.go:88] waiting for apiserver healthz status ...
I0925 11:33:50.391016 59899 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
I0925 11:33:50.391107 59899 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.090546724s)
I0925 11:33:50.391160 59899 main.go:141] libmachine: Making call to close driver server
I0925 11:33:50.391179 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
I0925 11:33:50.391539 59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
I0925 11:33:50.391540 59899 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:33:50.391568 59899 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:33:50.391584 59899 main.go:141] libmachine: Making call to close driver server
I0925 11:33:50.391594 59899 main.go:141] libmachine: (embed-certs-094323) Calling .Close
I0925 11:33:50.391810 59899 main.go:141] libmachine: Successfully made call to close driver server
I0925 11:33:50.391822 59899 main.go:141] libmachine: (embed-certs-094323) DBG | Closing plugin on server side
I0925 11:33:50.391828 59899 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 11:33:50.393750 59899 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-094323 addons enable metrics-server
I0925 11:33:50.395438 59899 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
I0925 11:33:45.939462 57426 pod_ready.go:102] pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace has status "Ready":"False"
I0925 11:33:47.439176 57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
I0925 11:33:47.439201 57426 pod_ready.go:81] duration metric: took 3m1.018383263s waiting for pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace to be "Ready" ...
E0925 11:33:47.439210 57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "coredns-5644d7b6d9-qnqxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
I0925 11:33:47.439218 57426 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
I0925 11:33:47.441757 57426 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
I0925 11:33:47.441785 57426 pod_ready.go:81] duration metric: took 2.55834ms waiting for pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace to be "Ready" ...
E0925 11:33:47.441797 57426 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-rn247" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-rn247" not found
I0925 11:33:47.441806 57426 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
I0925 11:33:47.447728 57426 pod_ready.go:97] node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
I0925 11:33:47.447759 57426 pod_ready.go:81] duration metric: took 5.944858ms waiting for pod "kube-proxy-gsdzk" in "kube-system" namespace to be "Ready" ...
E0925 11:33:47.447770 57426 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-694015" hosting pod "kube-proxy-gsdzk" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-694015" has status "Ready":"False"
I0925 11:33:47.447777 57426 pod_ready.go:38] duration metric: took 3m1.031173472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0925 11:33:47.447809 57426 api_server.go:52] waiting for apiserver process to appear ...
I0925 11:33:47.447887 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:33:47.480326 57426 logs.go:284] 1 containers: [34825b8222f1]
I0925 11:33:47.480410 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:33:47.500790 57426 logs.go:284] 1 containers: [4b655f8475a9]
I0925 11:33:47.500883 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:33:47.521967 57426 logs.go:284] 1 containers: [c4e353aa787b]
I0925 11:33:47.522043 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:33:47.542833 57426 logs.go:284] 1 containers: [08dbfa6061b3]
I0925 11:33:47.542921 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:33:47.564220 57426 logs.go:284] 1 containers: [2bccdb65c1cc]
I0925 11:33:47.564296 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:33:47.585142 57426 logs.go:284] 1 containers: [59225a8740b7]
I0925 11:33:47.585233 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:33:47.604606 57426 logs.go:284] 0 containers: []
W0925 11:33:47.604638 57426 logs.go:286] No container was found matching "kindnet"
I0925 11:33:47.604734 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:33:47.634903 57426 logs.go:284] 1 containers: [0f9de8bda7fb]
I0925 11:33:47.634987 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:33:47.659599 57426 logs.go:284] 1 containers: [90dc66317fc1]
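
The docker ps invocations above locate one container per control-plane component by name filter so their logs can be collected next. A compact Go sketch of that discovery step, under the same k8s_<component> naming convention the log shows:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the discovery step above: list all containers whose
// name matches a k8s_<component> prefix and return their IDs.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
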
I0925 11:33:47.659654 57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
I0925 11:33:47.659677 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
I0925 11:33:47.713402 57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
I0925 11:33:47.713441 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
I0925 11:33:47.746308 57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
I0925 11:33:47.746347 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
I0925 11:33:47.777953 57426 logs.go:123] Gathering logs for describe nodes ...
I0925 11:33:47.777991 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:33:47.933013 57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
I0925 11:33:47.933041 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
I0925 11:33:47.959588 57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
I0925 11:33:47.959623 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
I0925 11:33:47.989240 57426 logs.go:123] Gathering logs for container status ...
I0925 11:33:47.989285 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
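
The backticked expression in the command above prefers crictl when it is installed and otherwise lets the pipeline fall through to docker ps -a. An equivalent fallback in Go, simplified in one respect: it only checks whether crictl is on the PATH, whereas the shell version also falls back if the crictl invocation itself fails:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl for container status, as the shell fallback above does;
	// otherwise use the Docker CLI.
	if _, err := exec.LookPath("crictl"); err == nil {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		fmt.Print(string(out))
		return
	}
	out, _ := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	fmt.Print(string(out))
}
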
I0925 11:33:48.069991 57426 logs.go:123] Gathering logs for kubelet ...
I0925 11:33:48.070022 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0925 11:33:48.107511 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:48.108197 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.108438 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.108657 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940 1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
W0925 11:33:48.109661 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:48.109891 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.110800 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:48.111045 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.111291 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.111524 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.112518 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:48.112765 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.112989 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.113221 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.113444 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.113656 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.113877 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.114848 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:48.115076 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.115297 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.115517 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.115743 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.115978 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.116194 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.148933 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:33:48.150648 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
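
The W-level "Found kubelet problem" lines above come from scanning the kubelet journal for known problem patterns. A rough Go sketch of such a scan; the regular expression is a guess at the kind of lines logs.go flags (pod sync errors, forbidden list calls), not its actual rule set:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	// Read the last 400 kubelet journal lines, as the gathering step above
	// does, and flag lines matching assumed problem patterns.
	problem := regexp.MustCompile(`pod_workers\.go.*Error syncing pod|Failed to list`)
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if problem.MatchString(sc.Text()) {
			fmt.Println("Found kubelet problem:", sc.Text())
		}
	}
}
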
I0925 11:33:48.152304 57426 logs.go:123] Gathering logs for dmesg ...
I0925 11:33:48.152321 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:33:48.170706 57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
I0925 11:33:48.170735 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
I0925 11:33:48.204533 57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
I0925 11:33:48.204574 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
I0925 11:33:48.242201 57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
I0925 11:33:48.242239 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
I0925 11:33:48.305874 57426 logs.go:123] Gathering logs for Docker ...
I0925 11:33:48.305916 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:33:48.375041 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:33:48.375074 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0925 11:33:48.375130 57426 out.go:239] X Problems detected in kubelet:
W0925 11:33:48.375142 57426 out.go:239] Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.375161 57426 out.go:239] Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.375169 57426 out.go:239] Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:48.375176 57426 out.go:239] Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:33:48.375185 57426 out.go:239] Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
I0925 11:33:48.375190 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:33:48.375199 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 11:33:50.396708 59899 addons.go:502] enable addons completed in 4.611221618s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
I0925 11:33:50.409202 59899 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
ok
I0925 11:33:50.411339 59899 api_server.go:141] control plane version: v1.28.2
I0925 11:33:50.411356 59899 api_server.go:131] duration metric: took 20.35197ms to wait for apiserver health ...
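
The healthz wait above succeeds once the endpoint returns HTTP 200 with the literal body "ok", as the "returned 200: ok" lines show. A standalone Go sketch of one probe against the address from the log; disabling certificate verification is a shortcut for the sketch, since minikube itself trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe /healthz once; the real wait retries this until it succeeds
	// or times out.
	url := "https://192.168.39.111:8443/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}
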
I0925 11:33:50.411366 59899 system_pods.go:43] waiting for kube-system pods to appear ...
I0925 11:33:50.490420 59899 system_pods.go:59] 8 kube-system pods found
I0925 11:33:50.490453 59899 system_pods.go:61] "coredns-5dd5756b68-56lj4" [447db0fe-7ec3-443c-9219-f6520653ae3f] Running
I0925 11:33:50.490461 59899 system_pods.go:61] "etcd-embed-certs-094323" [48127edf-44a2-46ac-b5db-c1d47f97c3a5] Running
I0925 11:33:50.490468 59899 system_pods.go:61] "kube-apiserver-embed-certs-094323" [3a47c725-2ede-48c8-a825-e3d1f90710f2] Running
I0925 11:33:50.490476 59899 system_pods.go:61] "kube-controller-manager-embed-certs-094323" [8692df25-5b4e-424b-8ae0-aedd5f249b98] Running
I0925 11:33:50.490483 59899 system_pods.go:61] "kube-proxy-pjwm2" [845a56ac-d0b3-4331-aa60-8d473ca65a44] Running
I0925 11:33:50.490489 59899 system_pods.go:61] "kube-scheduler-embed-certs-094323" [12968319-1047-4b1d-a54f-7c192604a75d] Running
I0925 11:33:50.490500 59899 system_pods.go:61] "metrics-server-57f55c9bc5-5xjw8" [5634c692-d7e5-49d5-a39a-3473e5f58d58] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:33:50.490515 59899 system_pods.go:61] "storage-provisioner" [913ce54f-ebcc-4b9c-bf76-ff0139a1b44f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:33:50.490528 59899 system_pods.go:74] duration metric: took 79.155444ms to wait for pod list to return data ...
I0925 11:33:50.490540 59899 default_sa.go:34] waiting for default service account to be created ...
I0925 11:33:50.691794 59899 default_sa.go:45] found service account: "default"
I0925 11:33:50.691828 59899 default_sa.go:55] duration metric: took 201.27577ms for default service account to be created ...
I0925 11:33:50.691838 59899 system_pods.go:116] waiting for k8s-apps to be running ...
I0925 11:33:50.887600 59899 system_pods.go:86] 8 kube-system pods found
I0925 11:33:50.887636 59899 system_pods.go:89] "coredns-5dd5756b68-56lj4" [447db0fe-7ec3-443c-9219-f6520653ae3f] Running
I0925 11:33:50.887645 59899 system_pods.go:89] "etcd-embed-certs-094323" [48127edf-44a2-46ac-b5db-c1d47f97c3a5] Running
I0925 11:33:50.887652 59899 system_pods.go:89] "kube-apiserver-embed-certs-094323" [3a47c725-2ede-48c8-a825-e3d1f90710f2] Running
I0925 11:33:50.887662 59899 system_pods.go:89] "kube-controller-manager-embed-certs-094323" [8692df25-5b4e-424b-8ae0-aedd5f249b98] Running
I0925 11:33:50.887668 59899 system_pods.go:89] "kube-proxy-pjwm2" [845a56ac-d0b3-4331-aa60-8d473ca65a44] Running
I0925 11:33:50.887675 59899 system_pods.go:89] "kube-scheduler-embed-certs-094323" [12968319-1047-4b1d-a54f-7c192604a75d] Running
I0925 11:33:50.887683 59899 system_pods.go:89] "metrics-server-57f55c9bc5-5xjw8" [5634c692-d7e5-49d5-a39a-3473e5f58d58] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:33:50.887694 59899 system_pods.go:89] "storage-provisioner" [913ce54f-ebcc-4b9c-bf76-ff0139a1b44f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:33:50.887707 59899 system_pods.go:126] duration metric: took 195.862461ms to wait for k8s-apps to be running ...
I0925 11:33:50.887718 59899 system_svc.go:44] waiting for kubelet service to be running ....
I0925 11:33:50.887769 59899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0925 11:33:50.910382 59899 system_svc.go:56] duration metric: took 22.655864ms WaitForService to wait for kubelet.
I0925 11:33:50.910410 59899 kubeadm.go:581] duration metric: took 5.091888107s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0925 11:33:50.910429 59899 node_conditions.go:102] verifying NodePressure condition ...
I0925 11:33:51.083597 59899 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0925 11:33:51.083633 59899 node_conditions.go:123] node cpu capacity is 2
I0925 11:33:51.083648 59899 node_conditions.go:105] duration metric: took 173.214402ms to run NodePressure ...
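
The NodePressure step above reads node capacity (ephemeral storage, 17784752Ki, and CPU, 2) to verify the node is usable. The same two values can be fetched with kubectl jsonpath queries, as in this sketch for a single-node cluster; the queries are an assumption about where those numbers live in the Node object, which matches the usual .status.capacity fields:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Fetch the two capacity values node_conditions.go reports above.
	for _, jp := range []string{
		"{.items[0].status.capacity.ephemeral-storage}",
		"{.items[0].status.capacity.cpu}",
	} {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "jsonpath="+jp).Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(string(out))
	}
}
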
I0925 11:33:51.083660 59899 start.go:228] waiting for startup goroutines ...
I0925 11:33:51.083670 59899 start.go:233] waiting for cluster config update ...
I0925 11:33:51.083682 59899 start.go:242] writing updated cluster config ...
I0925 11:33:51.084016 59899 ssh_runner.go:195] Run: rm -f paused
I0925 11:33:51.130189 59899 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
I0925 11:33:51.132357 59899 out.go:177] * Done! kubectl is now configured to use "embed-certs-094323" cluster and "default" namespace by default
I0925 11:33:58.376816 57426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0925 11:33:58.397417 57426 api_server.go:72] duration metric: took 3m12.267407933s to wait for apiserver process to appear ...
I0925 11:33:58.397443 57426 api_server.go:88] waiting for apiserver healthz status ...
I0925 11:33:58.397517 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:33:58.423312 57426 logs.go:284] 1 containers: [34825b8222f1]
I0925 11:33:58.423385 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:33:58.443439 57426 logs.go:284] 1 containers: [4b655f8475a9]
I0925 11:33:58.443499 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:33:58.463360 57426 logs.go:284] 1 containers: [c4e353aa787b]
I0925 11:33:58.463443 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:33:58.486151 57426 logs.go:284] 1 containers: [08dbfa6061b3]
I0925 11:33:58.486228 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:33:58.507009 57426 logs.go:284] 1 containers: [2bccdb65c1cc]
I0925 11:33:58.507095 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:33:58.525571 57426 logs.go:284] 1 containers: [59225a8740b7]
I0925 11:33:58.525647 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:33:58.542397 57426 logs.go:284] 0 containers: []
W0925 11:33:58.542424 57426 logs.go:286] No container was found matching "kindnet"
I0925 11:33:58.542481 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:33:58.562186 57426 logs.go:284] 1 containers: [0f9de8bda7fb]
I0925 11:33:58.562260 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:33:58.580984 57426 logs.go:284] 1 containers: [90dc66317fc1]
I0925 11:33:58.581014 57426 logs.go:123] Gathering logs for describe nodes ...
I0925 11:33:58.581030 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:33:58.731921 57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
I0925 11:33:58.731958 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
I0925 11:33:58.759982 57426 logs.go:123] Gathering logs for Docker ...
I0925 11:33:58.760017 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:33:58.817088 57426 logs.go:123] Gathering logs for kubelet ...
I0925 11:33:58.817120 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0925 11:33:58.851581 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:58.852006 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.852226 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.852405 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940 1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
W0925 11:33:58.853080 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:58.853245 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.853866 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:58.854027 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.854211 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.854408 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.855047 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:58.855223 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.855403 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.855601 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.855811 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.856008 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.856210 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.856868 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:33:58.857032 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.857219 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.857418 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.857616 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.857814 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.858011 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:58.889357 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:33:58.891108 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
I0925 11:33:58.893160 57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
I0925 11:33:58.893178 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
I0925 11:33:58.927223 57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
I0925 11:33:58.927264 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
I0925 11:33:58.951343 57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
I0925 11:33:58.951376 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
I0925 11:33:58.979268 57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
I0925 11:33:58.979303 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
I0925 11:33:59.010031 57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
I0925 11:33:59.010059 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
I0925 11:33:59.050333 57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
I0925 11:33:59.050367 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
I0925 11:33:59.093782 57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
I0925 11:33:59.093820 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
I0925 11:33:59.118196 57426 logs.go:123] Gathering logs for container status ...
I0925 11:33:59.118222 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0925 11:33:59.228267 57426 logs.go:123] Gathering logs for dmesg ...
I0925 11:33:59.228306 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:33:59.247426 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:33:59.247459 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0925 11:33:59.247517 57426 out.go:239] X Problems detected in kubelet:
W0925 11:33:59.247534 57426 out.go:239] Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:59.247545 57426 out.go:239] Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:59.247554 57426 out.go:239] Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:33:59.247563 57426 out.go:239] Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:33:59.247574 57426 out.go:239] Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
I0925 11:33:59.247584 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:33:59.247597 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
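Every kubelet problem flagged above traces back to the image reference fake.domain/registry.k8s.io/echoserver:1.4, a deliberately unresolvable registry host that this test wires into the metrics-server deployment, so the pod can never leave ImagePullBackOff. A minimal Go sketch (not minikube code) that reproduces the underlying DNS failure the Docker daemon reports:

    // Reproduce the DNS failure behind the ErrImagePull entries above.
    // "fake.domain" is the intentionally unresolvable registry host.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, err := net.LookupHost("fake.domain")
        // Expected: "lookup fake.domain ... no such host", matching the
        // "dial tcp: lookup fake.domain on 192.168.122.1:53" errors in the log.
        fmt.Println(err)
    }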
I0925 11:34:09.249955 57426 api_server.go:253] Checking apiserver healthz at https://192.168.50.17:8443/healthz ...
I0925 11:34:09.256612 57426 api_server.go:279] https://192.168.50.17:8443/healthz returned 200:
ok
I0925 11:34:09.257809 57426 api_server.go:141] control plane version: v1.16.0
I0925 11:34:09.257827 57426 api_server.go:131] duration metric: took 10.860379501s to wait for apiserver health ...
I0925 11:34:09.257833 57426 system_pods.go:43] waiting for kube-system pods to appear ...
I0925 11:34:09.257883 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0925 11:34:09.280149 57426 logs.go:284] 1 containers: [34825b8222f1]
I0925 11:34:09.280233 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0925 11:34:09.300127 57426 logs.go:284] 1 containers: [4b655f8475a9]
I0925 11:34:09.300211 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0925 11:34:09.332581 57426 logs.go:284] 1 containers: [c4e353aa787b]
I0925 11:34:09.332656 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0925 11:34:09.352994 57426 logs.go:284] 1 containers: [08dbfa6061b3]
I0925 11:34:09.353061 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0925 11:34:09.374892 57426 logs.go:284] 1 containers: [2bccdb65c1cc]
I0925 11:34:09.374960 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0925 11:34:09.395820 57426 logs.go:284] 1 containers: [59225a8740b7]
I0925 11:34:09.395884 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0925 11:34:09.414225 57426 logs.go:284] 0 containers: []
W0925 11:34:09.414245 57426 logs.go:286] No container was found matching "kindnet"
I0925 11:34:09.414284 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0925 11:34:09.434336 57426 logs.go:284] 1 containers: [0f9de8bda7fb]
I0925 11:34:09.434398 57426 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0925 11:34:09.456185 57426 logs.go:284] 1 containers: [90dc66317fc1]
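The block above discovers one container ID per control-plane component by filtering docker ps -a on the k8s_<component> name prefix. A sketch of the same discovery loop, assuming a local docker CLI (minikube runs the command over SSH via ssh_runner):

    // List container IDs per component with the same docker
    // filter/format flags the log shows.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kubernetes-dashboard", "storage-provisioner"}
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter=name=k8s_"+c, "--format={{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
        }
    }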
I0925 11:34:09.456218 57426 logs.go:123] Gathering logs for describe nodes ...
I0925 11:34:09.456231 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0925 11:34:09.590378 57426 logs.go:123] Gathering logs for kube-scheduler [08dbfa6061b3] ...
I0925 11:34:09.590409 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08dbfa6061b3"
I0925 11:34:09.617599 57426 logs.go:123] Gathering logs for kube-proxy [2bccdb65c1cc] ...
I0925 11:34:09.617624 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bccdb65c1cc"
I0925 11:34:09.643431 57426 logs.go:123] Gathering logs for kubernetes-dashboard [0f9de8bda7fb] ...
I0925 11:34:09.643459 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f9de8bda7fb"
I0925 11:34:09.665103 57426 logs.go:123] Gathering logs for etcd [4b655f8475a9] ...
I0925 11:34:09.665129 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b655f8475a9"
I0925 11:34:09.693931 57426 logs.go:123] Gathering logs for kube-controller-manager [59225a8740b7] ...
I0925 11:34:09.693963 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59225a8740b7"
I0925 11:34:09.742784 57426 logs.go:123] Gathering logs for Docker ...
I0925 11:34:09.742812 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0925 11:34:09.804145 57426 logs.go:123] Gathering logs for dmesg ...
I0925 11:34:09.804177 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0925 11:34:09.818586 57426 logs.go:123] Gathering logs for kube-apiserver [34825b8222f1] ...
I0925 11:34:09.818609 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34825b8222f1"
I0925 11:34:09.857846 57426 logs.go:123] Gathering logs for coredns [c4e353aa787b] ...
I0925 11:34:09.857875 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4e353aa787b"
I0925 11:34:09.880799 57426 logs.go:123] Gathering logs for container status ...
I0925 11:34:09.880828 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
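The container-status command is a shell fallback: use crictl when it is installed, otherwise fall back to docker ps -a. A sketch of that fallback shape in Go, with sudo omitted for simplicity:

    // Mirror the bash one-liner
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    // try crictl first, fall back to the docker CLI when it is missing.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // crictl absent or failing: fall back to docker.
            out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("no runtime CLI available:", err)
            return
        }
        fmt.Print(string(out))
    }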
I0925 11:34:09.950547 57426 logs.go:123] Gathering logs for kubelet ...
I0925 11:34:09.950572 57426 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0925 11:34:09.983084 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:49 old-k8s-version-694015 kubelet[1664]: E0925 11:25:49.602400 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:34:09.983479 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:50 old-k8s-version-694015 kubelet[1664]: E0925 11:25:50.619464 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.983617 57426 logs.go:138] Found kubelet problem: Sep 25 11:25:51 old-k8s-version-694015 kubelet[1664]: E0925 11:25:51.661072 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.983758 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:02 old-k8s-version-694015 kubelet[1664]: E0925 11:26:02.792940 1664 pod_workers.go:191] Error syncing pod ecfa3d77-460f-4a09-b035-18707c06fed3 ("storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ecfa3d77-460f-4a09-b035-18707c06fed3)"
W0925 11:34:09.984405 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:05 old-k8s-version-694015 kubelet[1664]: E0925 11:26:05.020444 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:34:09.984547 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:19 old-k8s-version-694015 kubelet[1664]: E0925 11:26:19.003368 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.985367 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:32 old-k8s-version-694015 kubelet[1664]: E0925 11:26:32.051177 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:34:09.985576 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:45 old-k8s-version-694015 kubelet[1664]: E0925 11:26:45.004295 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.985713 57426 logs.go:138] Found kubelet problem: Sep 25 11:26:58 old-k8s-version-694015 kubelet[1664]: E0925 11:26:58.003759 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.985898 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:12 old-k8s-version-694015 kubelet[1664]: E0925 11:27:12.004264 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.986632 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:27 old-k8s-version-694015 kubelet[1664]: E0925 11:27:27.023076 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:34:09.986786 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:39 old-k8s-version-694015 kubelet[1664]: E0925 11:27:39.006534 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.986945 57426 logs.go:138] Found kubelet problem: Sep 25 11:27:53 old-k8s-version-694015 kubelet[1664]: E0925 11:27:53.006724 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.987132 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:05 old-k8s-version-694015 kubelet[1664]: E0925 11:28:05.004093 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.987279 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:20 old-k8s-version-694015 kubelet[1664]: E0925 11:28:20.003435 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.987469 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:31 old-k8s-version-694015 kubelet[1664]: E0925 11:28:31.004553 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.987663 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:42 old-k8s-version-694015 kubelet[1664]: E0925 11:28:42.007858 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.988255 57426 logs.go:138] Found kubelet problem: Sep 25 11:28:57 old-k8s-version-694015 kubelet[1664]: E0925 11:28:57.022019 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
W0925 11:34:09.988398 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:10 old-k8s-version-694015 kubelet[1664]: E0925 11:29:10.005118 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.988533 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:22 old-k8s-version-694015 kubelet[1664]: E0925 11:29:22.006659 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.988685 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:34 old-k8s-version-694015 kubelet[1664]: E0925 11:29:34.004156 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.988822 57426 logs.go:138] Found kubelet problem: Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.988958 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:09.989093 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:10.020550 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:34:10.022302 57426 logs.go:138] Found kubelet problem: Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
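The "Found kubelet problem" warnings come from scanning the journalctl output for known failure signatures (logs.go:138 in the log). A sketch of such a scan; the pattern list here is illustrative, not minikube's actual rule set:

    // Scan recent kubelet journal entries for failure signatures.
    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
        if err != nil {
            fmt.Println("journalctl failed:", err)
            return
        }
        patterns := []string{"ImagePullBackOff", "ErrImagePull", "is forbidden", "CrashLoopBackOff"}
        sc := bufio.NewScanner(strings.NewReader(string(out)))
        for sc.Scan() {
            line := sc.Text()
            for _, p := range patterns {
                if strings.Contains(line, p) {
                    fmt.Println("Found kubelet problem:", line)
                    break
                }
            }
        }
    }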
I0925 11:34:10.024541 57426 logs.go:123] Gathering logs for storage-provisioner [90dc66317fc1] ...
I0925 11:34:10.024558 57426 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 90dc66317fc1"
I0925 11:34:10.053454 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:34:10.053477 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
W0925 11:34:10.053524 57426 out.go:239] X Problems detected in kubelet:
W0925 11:34:10.053535 57426 out.go:239] Sep 25 11:29:48 old-k8s-version-694015 kubelet[1664]: E0925 11:29:48.004789 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:10.053543 57426 out.go:239] Sep 25 11:30:00 old-k8s-version-694015 kubelet[1664]: E0925 11:30:00.004900 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:10.053551 57426 out.go:239] Sep 25 11:30:12 old-k8s-version-694015 kubelet[1664]: E0925 11:30:12.003540 1664 pod_workers.go:191] Error syncing pod 84a78e90-f876-4f01-8cc9-fb5ab93dceec ("metrics-server-74d5856cc6-mknft_kube-system(84a78e90-f876-4f01-8cc9-fb5ab93dceec)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0925 11:34:10.053557 57426 out.go:239] Sep 25 11:30:48 old-k8s-version-694015 kubelet[6852]: E0925 11:30:48.696939 6852 reflector.go:123] object-"kube-system"/"storage-provisioner-token-jvfjd": Failed to list *v1.Secret: secrets "storage-provisioner-token-jvfjd" is forbidden: User "system:node:old-k8s-version-694015" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "old-k8s-version-694015" and this object
W0925 11:34:10.053563 57426 out.go:239] Sep 25 11:30:49 old-k8s-version-694015 kubelet[6852]: E0925 11:30:49.783950 6852 pod_workers.go:191] Error syncing pod 5925c507-8225-4b9c-b89e-13346451d090 ("metrics-server-74d5856cc6-wbskx_kube-system(5925c507-8225-4b9c-b89e-13346451d090)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
I0925 11:34:10.053568 57426 out.go:309] Setting ErrFile to fd 2...
I0925 11:34:10.053573 57426 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 11:34:20.061232 57426 system_pods.go:59] 8 kube-system pods found
I0925 11:34:20.061260 57426 system_pods.go:61] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:20.061267 57426 system_pods.go:61] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:20.061271 57426 system_pods.go:61] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:20.061277 57426 system_pods.go:61] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:20.061284 57426 system_pods.go:61] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:20.061288 57426 system_pods.go:61] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:20.061295 57426 system_pods.go:61] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:20.061300 57426 system_pods.go:61] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:20.061307 57426 system_pods.go:74] duration metric: took 10.803468736s to wait for pod list to return data ...
I0925 11:34:20.061314 57426 default_sa.go:34] waiting for default service account to be created ...
I0925 11:34:20.064090 57426 default_sa.go:45] found service account: "default"
I0925 11:34:20.064114 57426 default_sa.go:55] duration metric: took 2.793638ms for default service account to be created ...
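The default-service-account wait above polls until the "default" ServiceAccount exists in the default namespace. A sketch of one such check using client-go; the kubeconfig path is a placeholder, and the real helper lives in minikube's default_sa.go:

    // Check whether the default ServiceAccount has been created yet.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
        if err != nil {
            fmt.Println("not yet created:", err)
            return
        }
        fmt.Println("found service account:", sa.Name)
    }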
I0925 11:34:20.064123 57426 system_pods.go:116] waiting for k8s-apps to be running ...
I0925 11:34:20.068614 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:20.068644 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:20.068653 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:20.068674 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:20.068682 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:20.068690 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:20.068696 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:20.068707 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:20.068719 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:20.068739 57426 retry.go:31] will retry after 201.15744ms: missing components: kube-dns, kube-proxy
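The "will retry after ..." lines show the wait loop's cadence: each poll of the kube-system pods is followed by a growing, jittered delay until no required components (here kube-dns and kube-proxy) are missing. A sketch of that retry shape; checkMissing stands in for the pod poll, and the exact backoff constants are assumptions:

    // Capped, jittered backoff loop matching the retry.go cadence above.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func checkMissing() []string {
        // Placeholder for the system_pods poll; would return e.g.
        // []string{"kube-dns", "kube-proxy"} while those pods are Pending.
        return nil
    }

    func main() {
        backoff := 200 * time.Millisecond
        for i := 0; i < 20; i++ {
            missing := checkMissing()
            if len(missing) == 0 {
                fmt.Println("all components running")
                return
            }
            d := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: missing components: %v\n", d, missing)
            time.Sleep(d)
            if backoff < 10*time.Second {
                backoff *= 2
            }
        }
        fmt.Println("timed out waiting for components")
    }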
I0925 11:34:20.275900 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:20.275943 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:20.275952 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:20.275960 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:20.275967 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:20.275974 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:20.275982 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:20.275992 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:20.276001 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:20.276021 57426 retry.go:31] will retry after 295.538203ms: missing components: kube-dns, kube-proxy
I0925 11:34:20.579425 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:20.579469 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:20.579480 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:20.579489 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:20.579497 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:20.579506 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:20.579513 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:20.579522 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:20.579531 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:20.579553 57426 retry.go:31] will retry after 438.061345ms: missing components: kube-dns, kube-proxy
I0925 11:34:21.024313 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:21.024351 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:21.024360 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:21.024365 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:21.024372 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:21.024381 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:21.024390 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:21.024401 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:21.024411 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:21.024428 57426 retry.go:31] will retry after 504.61622ms: missing components: kube-dns, kube-proxy
I0925 11:34:21.536419 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:21.536449 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:21.536460 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:21.536466 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:21.536470 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:21.536476 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:21.536480 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:21.536486 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:21.536492 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:21.536506 57426 retry.go:31] will retry after 484.39135ms: missing components: kube-dns, kube-proxy
I0925 11:34:22.027728 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:22.027766 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:22.027776 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:22.027783 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:22.027787 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:22.027796 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:22.027804 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:22.027814 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:22.027822 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:22.027838 57426 retry.go:31] will retry after 680.21989ms: missing components: kube-dns, kube-proxy
I0925 11:34:22.714282 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:22.714315 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:22.714326 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:22.714335 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:22.714342 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:22.714349 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:22.714354 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:22.714365 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:22.714381 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:22.714399 57426 retry.go:31] will retry after 719.383007ms: missing components: kube-dns, kube-proxy
I0925 11:34:23.438829 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:23.438855 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:23.438862 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:23.438867 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:23.438872 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:23.438877 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:23.438882 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:23.438891 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:23.438898 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:23.438912 57426 retry.go:31] will retry after 1.277927153s: missing components: kube-dns, kube-proxy
I0925 11:34:24.724821 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:24.724855 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:24.724864 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:24.724871 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:24.724878 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:24.724887 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:24.724894 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:24.724904 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:24.724919 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:24.724942 57426 retry.go:31] will retry after 1.757108265s: missing components: kube-dns, kube-proxy
I0925 11:34:26.488127 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:26.488156 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:26.488163 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:26.488182 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:26.488203 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:26.488213 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:26.488222 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:26.488232 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:26.488247 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:26.488266 57426 retry.go:31] will retry after 1.427718537s: missing components: kube-dns, kube-proxy
I0925 11:34:27.921755 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:27.921783 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:27.921790 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:27.921795 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:27.921800 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:27.921805 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:27.921810 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:27.921815 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:27.921821 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:27.921835 57426 retry.go:31] will retry after 1.957734881s: missing components: kube-dns, kube-proxy
I0925 11:34:29.885748 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:29.885776 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:29.885783 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:29.885789 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:29.885794 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:29.885799 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:29.885803 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:29.885810 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:29.885815 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:29.885830 57426 retry.go:31] will retry after 3.054467533s: missing components: kube-dns, kube-proxy
I0925 11:34:32.946353 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:32.946383 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:32.946391 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:32.946396 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:32.946401 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:32.946406 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:32.946410 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:32.946416 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:32.946421 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:32.946434 57426 retry.go:31] will retry after 3.761041339s: missing components: kube-dns, kube-proxy
I0925 11:34:36.713729 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:36.713754 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:36.713761 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:36.713767 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:36.713772 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:36.713777 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:36.713781 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:36.713788 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:36.713793 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:36.713807 57426 retry.go:31] will retry after 4.734467176s: missing components: kube-dns, kube-proxy
I0925 11:34:41.454464 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:41.454492 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:41.454498 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:41.454503 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:41.454508 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:41.454513 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:41.454518 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:41.454524 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:41.454529 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:41.454542 57426 retry.go:31] will retry after 4.698913888s: missing components: kube-dns, kube-proxy
I0925 11:34:46.159214 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:46.159255 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:46.159266 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:46.159275 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:46.159282 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:46.159292 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:46.159299 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:46.159314 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:46.159328 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:46.159350 57426 retry.go:31] will retry after 5.507304477s: missing components: kube-dns, kube-proxy
I0925 11:34:51.672849 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:51.672877 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:51.672884 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:51.672889 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:51.672894 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:51.672899 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:51.672905 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:51.672914 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:51.672919 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:51.672933 57426 retry.go:31] will retry after 8.254229342s: missing components: kube-dns, kube-proxy
I0925 11:34:59.936057 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:34:59.936086 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:34:59.936094 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:34:59.936099 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:34:59.936104 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:34:59.936109 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:34:59.936114 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:34:59.936119 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:34:59.936125 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:34:59.936139 57426 retry.go:31] will retry after 9.535060954s: missing components: kube-dns, kube-proxy
I0925 11:35:09.479385 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:35:09.479413 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:35:09.479420 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:35:09.479428 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:35:09.479433 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:35:09.479441 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:35:09.479446 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:35:09.479452 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:35:09.479459 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:35:09.479471 57426 retry.go:31] will retry after 13.479799453s: missing components: kube-dns, kube-proxy
I0925 11:35:22.964926 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:35:22.964955 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:35:22.964962 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:35:22.964967 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:35:22.964972 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:35:22.964977 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:35:22.964982 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:35:22.964988 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:35:22.964993 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:35:22.965006 57426 retry.go:31] will retry after 14.199608167s: missing components: kube-dns, kube-proxy
I0925 11:35:37.171988 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:35:37.172022 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:35:37.172034 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:35:37.172041 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:35:37.172048 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:35:37.172055 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:35:37.172061 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:35:37.172072 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:35:37.172083 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:35:37.172101 57426 retry.go:31] will retry after 17.274040235s: missing components: kube-dns, kube-proxy
I0925 11:35:54.452675 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:35:54.452702 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:35:54.452709 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:35:54.452714 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:35:54.452719 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:35:54.452727 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:35:54.452731 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:35:54.452738 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:35:54.452743 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:35:54.452756 57426 retry.go:31] will retry after 28.29436119s: missing components: kube-dns, kube-proxy
I0925 11:36:22.755662 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:36:22.755700 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:36:22.755710 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:36:22.755718 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:36:22.755724 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:36:22.755732 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:36:22.755746 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:36:22.755761 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:36:22.755771 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:36:22.755791 57426 retry.go:31] will retry after 35.525659438s: missing components: kube-dns, kube-proxy
I0925 11:36:58.289849 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:36:58.289887 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:36:58.289896 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:36:58.289901 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:36:58.289910 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:36:58.289919 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:36:58.289927 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:36:58.289939 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:36:58.289950 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:36:58.289971 57426 retry.go:31] will retry after 44.058995008s: missing components: kube-dns, kube-proxy
I0925 11:37:42.356673 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:37:42.356698 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:37:42.356705 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:37:42.356710 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:37:42.356715 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:37:42.356721 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:37:42.356725 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:37:42.356731 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:37:42.356736 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:37:42.356752 57426 retry.go:31] will retry after 47.757072258s: missing components: kube-dns, kube-proxy
I0925 11:38:30.124408 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:38:30.124436 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:38:30.124443 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:38:30.124449 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:38:30.124454 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:38:30.124459 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:38:30.124464 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:38:30.124470 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:38:30.124475 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:38:30.124490 57426 retry.go:31] will retry after 48.54868015s: missing components: kube-dns, kube-proxy
I0925 11:39:18.680525 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:39:18.680555 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:39:18.680561 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:39:18.680567 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:39:18.680572 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:39:18.680578 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:39:18.680582 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:39:18.680589 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:39:18.680594 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:39:18.680607 57426 retry.go:31] will retry after 53.095866632s: missing components: kube-dns, kube-proxy
I0925 11:40:11.783486 57426 system_pods.go:86] 8 kube-system pods found
I0925 11:40:11.783513 57426 system_pods.go:89] "coredns-5644d7b6d9-qnqxm" [f5167272-c4e6-438f-ba45-f977df42bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0925 11:40:11.783520 57426 system_pods.go:89] "etcd-old-k8s-version-694015" [9cfaa418-12ab-4d9e-ba12-064f7d859508] Running
I0925 11:40:11.783527 57426 system_pods.go:89] "kube-apiserver-old-k8s-version-694015" [7a1c1b13-02e5-4963-b0c2-6a8a487de2c9] Running
I0925 11:40:11.783532 57426 system_pods.go:89] "kube-controller-manager-old-k8s-version-694015" [6f3e2cb4-ec9a-4f2f-be75-4676e8dd3c26] Running
I0925 11:40:11.783537 57426 system_pods.go:89] "kube-proxy-gsdzk" [d183e6c3-2cf1-46d4-a9ff-e03c97aa161c] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0925 11:40:11.783542 57426 system_pods.go:89] "kube-scheduler-old-k8s-version-694015" [99e5005e-b087-4140-8740-50da156dc62d] Running
I0925 11:40:11.783548 57426 system_pods.go:89] "metrics-server-74d5856cc6-wbskx" [5925c507-8225-4b9c-b89e-13346451d090] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0925 11:40:11.783553 57426 system_pods.go:89] "storage-provisioner" [c74c1aa8-7249-477e-8ef9-1bcaf418ad03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0925 11:40:11.786119 57426 out.go:177]
W0925 11:40:11.787697 57426 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns, kube-proxy
W0925 11:40:11.787711 57426 out.go:239] *
W0925 11:40:11.788461 57426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0925 11:40:11.790057 57426 out.go:177]
*
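For context on the retry lines above: minikube's app-wait loop polls the kube-system pods and retries with a growing interval until the expected components (here kube-dns, served by coredns, and kube-proxy) report Running, or the wait budget (6m0s in this run) is exhausted. A minimal client-go sketch of that polling pattern — an illustration only, not minikube's actual system_pods.go/retry.go code; waitForSystemPods and keys are hypothetical helpers — could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSystemPods mirrors the log above: list kube-system pods, report
// which expected components are not yet Running, and retry with a growing
// backoff until a deadline. Hypothetical helper, not minikube's code.
func waitForSystemPods(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 5 * time.Second
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		missing := map[string]bool{"kube-dns": true, "kube-proxy": true}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				continue
			}
			// coredns pods carry the k8s-app=kube-dns label in this release.
			delete(missing, p.Labels["k8s-app"])
		}
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("expected k8s-apps: missing components: %v", keys(missing))
		}
		fmt.Printf("will retry after %s: missing components: %v\n", backoff, keys(missing))
		time.Sleep(backoff)
		backoff += backoff / 2 // intervals grow, roughly as in the log above
	}
}

func keys(m map[string]bool) []string {
	out := make([]string, 0, len(m))
	for k := range m {
		out = append(out, k)
	}
	return out
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForSystemPods(context.Background(), cs, 6*time.Minute); err != nil {
		fmt.Println("wait failed:", err)
	}
}

In this run the loop never converges: coredns and kube-proxy stay Pending for the whole window, so the start exits with GUEST_START once the wait budget runs out, and the sections below are the diagnostic log dump.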
* ==> Docker <==
* -- Journal begins at Mon 2023-09-25 11:24:53 UTC, ends at Mon 2023-09-25 11:40:12 UTC. --
Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572406518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572497492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572525871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.572544812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618491365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618680379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618696521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 25 11:30:50 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:50.618704838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.155674989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.155883992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.156004251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 25 11:30:51 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:51.156243152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.045907108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046033975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046090982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.046108215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 25 11:30:56 old-k8s-version-694015 dockerd[1190]: time="2023-09-25T11:30:56.109068079Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462862941Z" level=info msg="shim disconnected" id=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d namespace=moby
Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462964770Z" level=warning msg="cleaning up after shim disconnected" id=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d namespace=moby
Sep 25 11:30:56 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:30:56.462982909Z" level=info msg="cleaning up dead shim" namespace=moby
Sep 25 11:30:56 old-k8s-version-694015 dockerd[1190]: time="2023-09-25T11:30:56.463078511Z" level=info msg="ignoring event" container=5d3673792ccfc336b8935a34b5a443284dc8b677eebf5137a219cccc3c403f5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824501229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824684623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824701374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 25 11:31:02 old-k8s-version-694015 dockerd[1199]: time="2023-09-25T11:31:02.824713075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
*
* ==> container status <==
* CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0f9de8bda7fb kubernetesui/dashboard "/dashboard --insecu…" 9 minutes ago Up 9 minutes k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-z674w_kubernetes-dashboard_5d234114-a13f-403f-98e0-7b5fbf830fdd_0
5d3673792ccf registry.k8s.io/echoserver "nginx -g 'daemon of…" 9 minutes ago Exited (1) 9 minutes ago k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-mxvxx_kubernetes-dashboard_da3f5657-7e9d-4ba7-b42a-d92a2b5fd683_0
90dc66317fc1 6e38f40d628d "/storage-provisioner" 9 minutes ago Up 9 minutes k8s_storage-provisioner_storage-provisioner_kube-system_c74c1aa8-7249-477e-8ef9-1bcaf418ad03_0
b16fb26ba287 k8s.gcr.io/pause:3.1 "/pause" 9 minutes ago Up 9 minutes k8s_POD_storage-provisioner_kube-system_c74c1aa8-7249-477e-8ef9-1bcaf418ad03_0
4eb82cb0fa23 k8s.gcr.io/pause:3.1 "/pause" 9 minutes ago Up 9 minutes k8s_POD_kubernetes-dashboard-84b68f675b-z674w_kubernetes-dashboard_5d234114-a13f-403f-98e0-7b5fbf830fdd_0
802d2fbd8809 k8s.gcr.io/pause:3.1 "/pause" 9 minutes ago Up 9 minutes k8s_POD_dashboard-metrics-scraper-d6b4b5544-mxvxx_kubernetes-dashboard_da3f5657-7e9d-4ba7-b42a-d92a2b5fd683_0
6a94e2e5690b k8s.gcr.io/pause:3.1 "/pause" 9 minutes ago Up 9 minutes k8s_POD_metrics-server-74d5856cc6-wbskx_kube-system_5925c507-8225-4b9c-b89e-13346451d090_0
c4e353aa787b bf261d157914 "/coredns -conf /etc…" 9 minutes ago Up 9 minutes k8s_coredns_coredns-5644d7b6d9-qnqxm_kube-system_f5167272-c4e6-438f-ba45-f977df42bc3b_0
2bccdb65c1cc c21b0c7400f9 "/usr/local/bin/kube…" 9 minutes ago Up 9 minutes k8s_kube-proxy_kube-proxy-gsdzk_kube-system_d183e6c3-2cf1-46d4-a9ff-e03c97aa161c_0
2088f3a7c0bc k8s.gcr.io/pause:3.1 "/pause" 9 minutes ago Up 9 minutes k8s_POD_kube-proxy-gsdzk_kube-system_d183e6c3-2cf1-46d4-a9ff-e03c97aa161c_0
75c3319baa09 k8s.gcr.io/pause:3.1 "/pause" 9 minutes ago Up 9 minutes k8s_POD_coredns-5644d7b6d9-qnqxm_kube-system_f5167272-c4e6-438f-ba45-f977df42bc3b_0
eb63d31189ed k8s.gcr.io/pause:3.1 "/pause" 9 minutes ago Created k8s_POD_coredns-5644d7b6d9-rn247_kube-system_f0e633d0-75fb-4406-928a-ec680c4052fa_0
4b655f8475a9 b2756210eeab "etcd --advertise-cl…" 9 minutes ago Up 9 minutes k8s_etcd_etcd-old-k8s-version-694015_kube-system_319810d3e321e4b27bff365f5661410b_0
08dbfa6061b3 301ddc62b80b "kube-scheduler --au…" 9 minutes ago Up 9 minutes k8s_kube-scheduler_kube-scheduler-old-k8s-version-694015_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
59225a8740b7 06a629a7e51c "kube-controller-man…" 9 minutes ago Up 9 minutes k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-694015_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
34825b8222f1 b305571ca60a "kube-apiserver --ad…" 9 minutes ago Up 9 minutes k8s_kube-apiserver_kube-apiserver-old-k8s-version-694015_kube-system_ea8f9e449dd1304167590b964553922a_0
5b274efecb4d k8s.gcr.io/pause:3.1 "/pause" 9 minutes ago Up 9 minutes k8s_POD_etcd-old-k8s-version-694015_kube-system_319810d3e321e4b27bff365f5661410b_0
6e623a69a033 k8s.gcr.io/pause:3.1 "/pause" 9 minutes ago Up 9 minutes k8s_POD_kube-scheduler-old-k8s-version-694015_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
961cf08898d9 k8s.gcr.io/pause:3.1 "/pause" 9 minutes ago Up 9 minutes k8s_POD_kube-controller-manager-old-k8s-version-694015_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
713ec26ea888 k8s.gcr.io/pause:3.1 "/pause" 9 minutes ago Up 9 minutes k8s_POD_kube-apiserver-old-k8s-version-694015_kube-system_ea8f9e449dd1304167590b964553922a_0
time="2023-09-25T11:40:12Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
*
* ==> coredns [c4e353aa787b] <==
* .:53
2023-09-25T11:30:47.501Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2023-09-25T11:30:47.501Z [INFO] CoreDNS-1.6.2
2023-09-25T11:30:47.501Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
*
* ==> describe nodes <==
* Name: old-k8s-version-694015
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=old-k8s-version-694015
kubernetes.io/os=linux
minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
minikube.k8s.io/name=old-k8s-version-694015
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_09_25T11_30_30_0700
minikube.k8s.io/version=v1.31.2
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 25 Sep 2023 11:30:26 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 25 Sep 2023 11:40:08 +0000 Mon, 25 Sep 2023 11:30:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 25 Sep 2023 11:40:08 +0000 Mon, 25 Sep 2023 11:30:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 25 Sep 2023 11:40:08 +0000 Mon, 25 Sep 2023 11:30:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Mon, 25 Sep 2023 11:40:08 +0000 Mon, 25 Sep 2023 11:33:47 +0000 KubeletNotReady PLEG is not healthy: pleg was last seen active 9m22.343926768s ago; threshold is 3m0s
Addresses:
InternalIP: 192.168.50.17
Hostname: old-k8s-version-694015
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2165900Ki
pods: 110
System Info:
Machine ID: 1bd5d978d1e543b686646b2c32f30862
System UUID: 1bd5d978-d1e5-43b6-8664-6b2c32f30862
Boot ID: 5678d5b5-5910-4d2d-a245-2b8fc64bd779
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://24.0.6
Kubelet Version: v1.16.0
Kube-Proxy Version: v1.16.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (10 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-5644d7b6d9-qnqxm 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 9m27s
kube-system etcd-old-k8s-version-694015 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m23s
kube-system kube-apiserver-old-k8s-version-694015 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m25s
kube-system kube-controller-manager-old-k8s-version-694015 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m25s
kube-system kube-proxy-gsdzk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m27s
kube-system kube-scheduler-old-k8s-version-694015 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m20s
kube-system metrics-server-74d5856cc6-wbskx 100m (5%) 0 (0%) 200Mi (9%) 0 (0%) 9m23s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m24s
kubernetes-dashboard dashboard-metrics-scraper-d6b4b5544-mxvxx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m23s
kubernetes-dashboard kubernetes-dashboard-84b68f675b-z674w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m22s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 270Mi (12%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 9m53s (x8 over 9m54s) kubelet, old-k8s-version-694015 Node old-k8s-version-694015 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m53s (x8 over 9m54s) kubelet, old-k8s-version-694015 Node old-k8s-version-694015 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m53s (x7 over 9m54s) kubelet, old-k8s-version-694015 Node old-k8s-version-694015 status is now: NodeHasSufficientPID
Normal Starting 9m25s kube-proxy, old-k8s-version-694015 Starting kube-proxy.
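The Conditions block above is the crux of the failure: Ready went False at 11:33:47 with reason KubeletNotReady because the kubelet's PLEG had been inactive past its 3m threshold, which keeps coredns and kube-proxy from ever being reported Running. A hedged client-go sketch for surfacing such a condition programmatically — the same data `kubectl describe node` renders in the Conditions table; this is not part of minikube:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			// e.g. "old-k8s-version-694015 Ready=False KubeletNotReady: PLEG is not healthy: ..."
			fmt.Printf("%s %s=%s %s: %s\n", n.Name, c.Type, c.Status, c.Reason, c.Message)
		}
	}
}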
*
* ==> dmesg <==
* [ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.076891] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.528148] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.807712] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.166866] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.627379] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[Sep25 11:25] systemd-fstab-generator[508]: Ignoring "noauto" for root device
[ +0.112649] systemd-fstab-generator[519]: Ignoring "noauto" for root device
[ +1.250517] systemd-fstab-generator[879]: Ignoring "noauto" for root device
[ +0.395221] systemd-fstab-generator[917]: Ignoring "noauto" for root device
[ +0.132329] systemd-fstab-generator[928]: Ignoring "noauto" for root device
[ +0.148539] systemd-fstab-generator[941]: Ignoring "noauto" for root device
[ +6.146658] systemd-fstab-generator[1181]: Ignoring "noauto" for root device
[ +1.531877] kauditd_printk_skb: 67 callbacks suppressed
[ +13.077793] systemd-fstab-generator[1658]: Ignoring "noauto" for root device
[ +0.487565] kauditd_printk_skb: 29 callbacks suppressed
[ +0.199945] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[ +24.809912] kauditd_printk_skb: 5 callbacks suppressed
[Sep25 11:26] hrtimer: interrupt took 6685373 ns
[Sep25 11:30] systemd-fstab-generator[6846]: Ignoring "noauto" for root device
[Sep25 11:31] kauditd_printk_skb: 5 callbacks suppressed
*
* ==> etcd [4b655f8475a9] <==
* 2023-09-25 11:30:21.297192 I | etcdserver: initial cluster = old-k8s-version-694015=https://192.168.50.17:2380
2023-09-25 11:30:21.310739 I | etcdserver: starting member a74ab9f845be4a88 in cluster e7a7808069af5882
2023-09-25 11:30:21.310817 I | raft: a74ab9f845be4a88 became follower at term 0
2023-09-25 11:30:21.348667 I | raft: newRaft a74ab9f845be4a88 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2023-09-25 11:30:21.348787 I | raft: a74ab9f845be4a88 became follower at term 1
2023-09-25 11:30:21.595167 W | auth: simple token is not cryptographically signed
2023-09-25 11:30:21.604807 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
2023-09-25 11:30:21.607417 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2023-09-25 11:30:21.608224 I | etcdserver: a74ab9f845be4a88 as single-node; fast-forwarding 9 ticks (election ticks 10)
2023-09-25 11:30:21.609008 I | etcdserver/membership: added member a74ab9f845be4a88 [https://192.168.50.17:2380] to cluster e7a7808069af5882
2023-09-25 11:30:21.609764 I | embed: listening for metrics on http://127.0.0.1:2381
2023-09-25 11:30:21.610013 I | embed: listening for metrics on http://192.168.50.17:2381
2023-09-25 11:30:22.316022 I | raft: a74ab9f845be4a88 is starting a new election at term 1
2023-09-25 11:30:22.316075 I | raft: a74ab9f845be4a88 became candidate at term 2
2023-09-25 11:30:22.316089 I | raft: a74ab9f845be4a88 received MsgVoteResp from a74ab9f845be4a88 at term 2
2023-09-25 11:30:22.316099 I | raft: a74ab9f845be4a88 became leader at term 2
2023-09-25 11:30:22.316104 I | raft: raft.node: a74ab9f845be4a88 elected leader a74ab9f845be4a88 at term 2
2023-09-25 11:30:22.316356 I | etcdserver: setting up the initial cluster version to 3.3
2023-09-25 11:30:22.318109 N | etcdserver/membership: set the initial cluster version to 3.3
2023-09-25 11:30:22.318162 I | etcdserver/api: enabled capabilities for version 3.3
2023-09-25 11:30:22.318191 I | etcdserver: published {Name:old-k8s-version-694015 ClientURLs:[https://192.168.50.17:2379]} to cluster e7a7808069af5882
2023-09-25 11:30:22.318197 I | embed: ready to serve client requests
2023-09-25 11:30:22.318821 I | embed: ready to serve client requests
2023-09-25 11:30:22.319844 I | embed: serving client requests on 127.0.0.1:2379
2023-09-25 11:30:22.319991 I | embed: serving client requests on 192.168.50.17:2379
*
* ==> kernel <==
* 11:40:12 up 15 min, 0 users, load average: 0.27, 0.37, 0.27
Linux old-k8s-version-694015 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [34825b8222f1] <==
* I0925 11:31:49.979903 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
W0925 11:31:49.979987 1 handler_proxy.go:99] no RequestInfo found in the context
E0925 11:31:49.980034 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0925 11:31:49.980118 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0925 11:33:49.980819 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
W0925 11:33:49.981054 1 handler_proxy.go:99] no RequestInfo found in the context
E0925 11:33:49.981162 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0925 11:33:49.981270 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0925 11:35:26.965809 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
W0925 11:35:26.965948 1 handler_proxy.go:99] no RequestInfo found in the context
E0925 11:35:26.966022 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0925 11:35:26.966030 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0925 11:36:26.966408 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
W0925 11:36:26.966779 1 handler_proxy.go:99] no RequestInfo found in the context
E0925 11:36:26.966986 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0925 11:36:26.967121 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0925 11:38:26.967894 1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
W0925 11:38:26.968064 1 handler_proxy.go:99] no RequestInfo found in the context
E0925 11:38:26.968162 1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0925 11:38:26.968198 1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
*
* ==> kube-controller-manager [59225a8740b7] <==
* I0925 11:33:50.382473 1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0925 11:33:57.898753 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:34:17.667175 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0925 11:34:29.900850 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:34:47.919904 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0925 11:35:01.902850 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:35:18.172387 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0925 11:35:33.904989 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:35:48.424547 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0925 11:36:05.907379 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:36:18.676868 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0925 11:36:37.909138 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:36:48.932033 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0925 11:37:09.911153 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:37:19.184303 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0925 11:37:41.913226 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:37:49.436394 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0925 11:38:13.915534 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:38:19.688419 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0925 11:38:45.924819 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:38:49.940696 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0925 11:39:17.927265 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:39:20.192628 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0925 11:39:49.929359 1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0925 11:39:50.444391 1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
*
* ==> kube-proxy [2bccdb65c1cc] <==
* W0925 11:30:47.128400 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
I0925 11:30:47.177538 1 node.go:135] Successfully retrieved node IP: 192.168.50.17
I0925 11:30:47.177648 1 server_others.go:149] Using iptables Proxier.
I0925 11:30:47.271820 1 server.go:529] Version: v1.16.0
I0925 11:30:47.304914 1 config.go:313] Starting service config controller
I0925 11:30:47.305050 1 shared_informer.go:197] Waiting for caches to sync for service config
I0925 11:30:47.305152 1 config.go:131] Starting endpoints config controller
I0925 11:30:47.305167 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0925 11:30:47.424722 1 shared_informer.go:204] Caches are synced for endpoints config
I0925 11:30:47.424968 1 shared_informer.go:204] Caches are synced for service config
*
* ==> kube-scheduler [08dbfa6061b3] <==
* W0925 11:30:25.965118 1 authentication.go:79] Authentication is disabled
I0925 11:30:25.965128 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0925 11:30:25.969940 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E0925 11:30:26.032268 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0925 11:30:26.032513 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0925 11:30:26.034880 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0925 11:30:26.035163 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0925 11:30:26.035326 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0925 11:30:26.035758 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0925 11:30:26.041977 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0925 11:30:26.042199 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0925 11:30:26.042371 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0925 11:30:26.043936 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0925 11:30:26.044107 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0925 11:30:27.035540 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0925 11:30:27.039764 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0925 11:30:27.039841 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0925 11:30:27.044797 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0925 11:30:27.047742 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0925 11:30:27.047784 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0925 11:30:27.049796 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0925 11:30:27.051510 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0925 11:30:27.054657 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0925 11:30:27.058480 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0925 11:30:27.061633 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
*
* ==> kubelet <==
* -- Journal begins at Mon 2023-09-25 11:24:53 UTC, ends at Mon 2023-09-25 11:40:13 UTC. --
Sep 25 11:38:08 old-k8s-version-694015 kubelet[6852]: I0925 11:38:08.080055 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m21.857503263s ago; threshold is 3m0s
Sep 25 11:38:13 old-k8s-version-694015 kubelet[6852]: I0925 11:38:13.080380 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m26.857823167s ago; threshold is 3m0s
Sep 25 11:38:18 old-k8s-version-694015 kubelet[6852]: I0925 11:38:18.080741 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m31.858155337s ago; threshold is 3m0s
Sep 25 11:38:23 old-k8s-version-694015 kubelet[6852]: I0925 11:38:23.081649 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m36.859004603s ago; threshold is 3m0s
Sep 25 11:38:28 old-k8s-version-694015 kubelet[6852]: I0925 11:38:28.082433 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m41.859872366s ago; threshold is 3m0s
Sep 25 11:38:33 old-k8s-version-694015 kubelet[6852]: I0925 11:38:33.083425 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m46.860872476s ago; threshold is 3m0s
Sep 25 11:38:38 old-k8s-version-694015 kubelet[6852]: I0925 11:38:38.084178 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m51.86163424s ago; threshold is 3m0s
Sep 25 11:38:43 old-k8s-version-694015 kubelet[6852]: I0925 11:38:43.085023 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 7m56.862471059s ago; threshold is 3m0s
Sep 25 11:38:48 old-k8s-version-694015 kubelet[6852]: I0925 11:38:48.085439 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m1.862884367s ago; threshold is 3m0s
Sep 25 11:38:53 old-k8s-version-694015 kubelet[6852]: I0925 11:38:53.085770 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m6.863221874s ago; threshold is 3m0s
Sep 25 11:38:58 old-k8s-version-694015 kubelet[6852]: I0925 11:38:58.086030 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m11.863489755s ago; threshold is 3m0s
Sep 25 11:39:03 old-k8s-version-694015 kubelet[6852]: I0925 11:39:03.086684 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m16.864149459s ago; threshold is 3m0s
Sep 25 11:39:08 old-k8s-version-694015 kubelet[6852]: I0925 11:39:08.086940 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m21.864399202s ago; threshold is 3m0s
Sep 25 11:39:13 old-k8s-version-694015 kubelet[6852]: I0925 11:39:13.087347 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m26.864795058s ago; threshold is 3m0s
Sep 25 11:39:18 old-k8s-version-694015 kubelet[6852]: I0925 11:39:18.087708 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m31.865164287s ago; threshold is 3m0s
Sep 25 11:39:23 old-k8s-version-694015 kubelet[6852]: I0925 11:39:23.088620 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m36.866021478s ago; threshold is 3m0s
Sep 25 11:39:28 old-k8s-version-694015 kubelet[6852]: I0925 11:39:28.089544 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m41.867001241s ago; threshold is 3m0s
Sep 25 11:39:33 old-k8s-version-694015 kubelet[6852]: I0925 11:39:33.090422 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m46.867863356s ago; threshold is 3m0s
Sep 25 11:39:38 old-k8s-version-694015 kubelet[6852]: I0925 11:39:38.091175 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m51.868631697s ago; threshold is 3m0s
Sep 25 11:39:43 old-k8s-version-694015 kubelet[6852]: I0925 11:39:43.091473 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 8m56.868932531s ago; threshold is 3m0s
Sep 25 11:39:48 old-k8s-version-694015 kubelet[6852]: I0925 11:39:48.091888 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 9m1.86934497s ago; threshold is 3m0s
Sep 25 11:39:53 old-k8s-version-694015 kubelet[6852]: I0925 11:39:53.092820 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 9m6.870276979s ago; threshold is 3m0s
Sep 25 11:39:58 old-k8s-version-694015 kubelet[6852]: I0925 11:39:58.093478 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 9m11.870931398s ago; threshold is 3m0s
Sep 25 11:40:03 old-k8s-version-694015 kubelet[6852]: I0925 11:40:03.093775 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 9m16.871233114s ago; threshold is 3m0s
Sep 25 11:40:08 old-k8s-version-694015 kubelet[6852]: I0925 11:40:08.094530 6852 kubelet.go:1839] skipping pod synchronization - PLEG is not healthy: pleg was last seen active 9m21.871914466s ago; threshold is 3m0s
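[Editor's annotation, not part of the captured log.] The PLEG lines above are the real failure signal: the kubelet's Pod Lifecycle Event Generator last relisted the container runtime around 11:30:46, the reported gap grows in 5-second steps with each sync loop, and once it exceeds the 3m0s threshold the node goes NotReady, which is why the pods never settle and the 15-minute start times out. On this Docker 24.0.6 VM that pattern usually points at a hung container runtime rather than at the kubelet itself. A small hypothetical helper, assuming only the journal format shown above, that flags stale PLEG from a piped-in kubelet journal:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches the kubelet message in the journal above, e.g.
// "pleg was last seen active 9m21.871914466s ago; threshold is 3m0s".
var plegRe = regexp.MustCompile(`pleg was last seen active (\S+) ago; threshold is (\S+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := plegRe.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		age, err1 := time.ParseDuration(m[1])
		threshold, err2 := time.ParseDuration(m[2])
		if err1 != nil || err2 != nil {
			continue
		}
		if age > threshold {
			fmt.Printf("PLEG stale by %v (threshold %v); suspect a hung container runtime\n",
				age-threshold, threshold)
		}
	}
}

Fed the journal excerpt above, it would report PLEG stale by roughly 6m22s at the final 11:40:08 entry.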
*
* ==> kubernetes-dashboard [0f9de8bda7fb] <==
* 2023/09/25 11:31:02 Generating JWE encryption key
2023/09/25 11:31:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2023/09/25 11:31:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2023/09/25 11:31:03 Initializing JWE encryption key from synchronized object
2023/09/25 11:31:03 Creating in-cluster Sidecar client
2023/09/25 11:31:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:31:03 Serving insecurely on HTTP port: 9090
2023/09/25 11:31:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:32:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:32:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:33:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:33:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:34:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:34:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:35:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:35:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:36:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:36:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:37:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:37:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:38:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:38:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:39:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:39:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2023/09/25 11:40:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
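[Editor's annotation, not part of the captured log.] The dashboard itself keeps serving on port 9090 and simply re-probes the dashboard-metrics-scraper Service every 30 seconds; the repeated failures are a downstream symptom of the degraded node, not a dashboard bug. The pattern is just a fixed-interval health probe, as in this sketch, where the URL and port are placeholders rather than the dashboard's actual endpoint:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder URL; the real scraper is reached through the cluster
	// network, not a bare hostname.
	const url = "http://dashboard-metrics-scraper:8000/healthz"
	for {
		resp, err := http.Get(url)
		switch {
		case err != nil:
			fmt.Println("health check failed:", err, "- retrying in 30 seconds")
		case resp.StatusCode != http.StatusOK:
			fmt.Println("health check failed: status", resp.Status, "- retrying in 30 seconds")
			resp.Body.Close()
		default:
			fmt.Println("metrics scraper healthy")
			resp.Body.Close()
		}
		time.Sleep(30 * time.Second)
	}
}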
*
* ==> storage-provisioner [90dc66317fc1] <==
* I0925 11:30:51.322039 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0925 11:30:51.347548 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0925 11:30:51.348062 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0925 11:30:51.364910 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0925 11:30:51.365497 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa!
I0925 11:30:51.368701 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82068dcb-41ed-493c-a127-6ea04652eda5", APIVersion:"v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa became leader
I0925 11:30:51.466721 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-694015_c2b9f123-e72a-43cd-8aaf-531be42e41fa!
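[Editor's annotation, not part of the captured log.] The storage-provisioner is the one component that comes up cleanly here: it wins the kube-system/k8s.io-minikube-hostpath leader election and starts its controller within about 150ms. The same pattern, sketched with client-go's leaderelection package; note this sketch uses a Lease lock, since recent client-go releases no longer accept the Endpoints lock that this provisioner's Event (Kind:"Endpoints") shows it using:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	hostname, _ := os.Hostname()
	// Same namespace/name as the lease acquired in the log above.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: hostname})
	if err != nil {
		panic(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// The provisioner controller would start here.
				fmt.Println("acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				fmt.Println("lost lease, shutting down")
			},
		},
	})
}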
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694015 -n old-k8s-version-694015
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-694015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5644d7b6d9-qnqxm kube-proxy-gsdzk metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm kube-proxy-gsdzk metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm kube-proxy-gsdzk metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w: exit status 1 (65.218763ms)
** stderr **
Error from server (NotFound): pods "coredns-5644d7b6d9-qnqxm" not found
Error from server (NotFound): pods "kube-proxy-gsdzk" not found
Error from server (NotFound): pods "metrics-server-74d5856cc6-wbskx" not found
Error from server (NotFound): pods "storage-provisioner" not found
Error from server (NotFound): pods "dashboard-metrics-scraper-d6b4b5544-mxvxx" not found
Error from server (NotFound): pods "kubernetes-dashboard-84b68f675b-z674w" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-694015 describe pod coredns-5644d7b6d9-qnqxm kube-proxy-gsdzk metrics-server-74d5856cc6-wbskx storage-provisioner dashboard-metrics-scraper-d6b4b5544-mxvxx kubernetes-dashboard-84b68f675b-z674w: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (933.21s)