=== RUN TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run: /tmp/minikube-v1.6.2.673550539.exe start -p stopped-upgrade-634233 --memory=2200 --vm-driver=kvm2
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.673550539.exe start -p stopped-upgrade-634233 --memory=2200 --vm-driver=kvm2 : exit status 70 (4.791563606s)
-- stdout --
* [stopped-upgrade-634233] minikube v1.6.2 on Ubuntu 20.04
- MINIKUBE_LOCATION=17488
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
- KUBECONFIG=/tmp/legacy_kubeconfig1356966743
* Selecting 'kvm2' driver from user configuration (alternates: [none])
* Downloading VM boot image ...
-- /stdout --
** stderr **
! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
error: failed to get emulator capabilities
error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
* Suggestion: Follow your Linux distribution instructions for configuring KVM
* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
> minikube-v1.6.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
> minikube-v1.6.0.iso: 150.93 MiB / 150.93 MiB [-] 100.00% 40.90 MiB p/s 4s
X Failed to cache ISO: https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso: Failed to open file for checksum: open /home/jenkins/minikube-integration/17488-80960/.minikube/cache/iso/minikube-v1.6.0.iso.download: no such file or directory
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
- https://github.com/kubernetes/minikube/issues/new/choose
** /stderr **
version_upgrade_test.go:196: (dbg) Run: /tmp/minikube-v1.6.2.673550539.exe start -p stopped-upgrade-634233 --memory=2200 --vm-driver=kvm2
E1025 21:54:46.751093 88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.673550539.exe start -p stopped-upgrade-634233 --memory=2200 --vm-driver=kvm2 : (1m55.48473624s)
version_upgrade_test.go:205: (dbg) Run: /tmp/minikube-v1.6.2.673550539.exe -p stopped-upgrade-634233 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.673550539.exe -p stopped-upgrade-634233 stop: (13.084391404s)
version_upgrade_test.go:211: (dbg) Run: out/minikube-linux-amd64 start -p stopped-upgrade-634233 --memory=2200 --alsologtostderr -v=1 --driver=kvm2
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-634233 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : exit status 109 (15m3.493775591s)
-- stdout --
* [stopped-upgrade-634233] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17488
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
* Using the kvm2 driver based on existing profile
* Starting control plane node stopped-upgrade-634233 in cluster stopped-upgrade-634233
* Restarting existing kvm2 VM for "stopped-upgrade-634233" ...
* Preparing Kubernetes v1.17.0 on Docker 19.03.5 ...
* Another minikube instance is downloading dependencies...
- Generating certificates and keys ...
- Booting up control plane ...
X Problems detected in kubelet:
Oct 25 22:10:32 stopped-upgrade-634233 kubelet[1836]: E1025 22:10:32.398652 1836 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:10:45 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:45.309286 3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:10:46 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:46.288468 3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
-- /stdout --
** stderr **
I1025 21:55:47.017822 112102 out.go:296] Setting OutFile to fd 1 ...
I1025 21:55:47.018113 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:55:47.018124 112102 out.go:309] Setting ErrFile to fd 2...
I1025 21:55:47.018129 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:55:47.018335 112102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
I1025 21:55:47.018861 112102 out.go:303] Setting JSON to false
I1025 21:55:47.019852 112102 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13082,"bootTime":1698257865,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1025 21:55:47.019912 112102 start.go:138] virtualization: kvm guest
I1025 21:55:47.022232 112102 out.go:177] * [stopped-upgrade-634233] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I1025 21:55:47.024147 112102 notify.go:220] Checking for updates...
I1025 21:55:47.024210 112102 out.go:177] - MINIKUBE_LOCATION=17488
I1025 21:55:47.025702 112102 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1025 21:55:47.027214 112102 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
I1025 21:55:47.028641 112102 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
I1025 21:55:47.030038 112102 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1025 21:55:47.031589 112102 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1025 21:55:47.033453 112102 config.go:182] Loaded profile config "stopped-upgrade-634233": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I1025 21:55:47.033468 112102 start_flags.go:689] config upgrade: Driver=kvm2
I1025 21:55:47.033476 112102 start_flags.go:701] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
I1025 21:55:47.033537 112102 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/config.json ...
I1025 21:55:47.034127 112102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:55:47.034184 112102 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:55:47.048676 112102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46153
I1025 21:55:47.049134 112102 main.go:141] libmachine: () Calling .GetVersion
I1025 21:55:47.049734 112102 main.go:141] libmachine: Using API Version 1
I1025 21:55:47.049760 112102 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:55:47.050086 112102 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:55:47.050255 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
I1025 21:55:47.052402 112102 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
I1025 21:55:47.054050 112102 driver.go:378] Setting default libvirt URI to qemu:///system
I1025 21:55:47.054342 112102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:55:47.054389 112102 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:55:47.070006 112102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
I1025 21:55:47.070441 112102 main.go:141] libmachine: () Calling .GetVersion
I1025 21:55:47.070906 112102 main.go:141] libmachine: Using API Version 1
I1025 21:55:47.070933 112102 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:55:47.071265 112102 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:55:47.071478 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
I1025 21:55:47.108024 112102 out.go:177] * Using the kvm2 driver based on existing profile
I1025 21:55:47.109321 112102 start.go:298] selected driver: kvm2
I1025 21:55:47.109339  112102 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-634233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.236 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
I1025 21:55:47.109471 112102 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1025 21:55:47.110219 112102 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 21:55:47.110313 112102 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17488-80960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1025 21:55:47.125036 112102 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
I1025 21:55:47.125395 112102 cni.go:84] Creating CNI manager for ""
I1025 21:55:47.125424 112102 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I1025 21:55:47.125439 112102 start_flags.go:323] config:
{Name:stopped-upgrade-634233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.236 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
I1025 21:55:47.125631 112102 iso.go:125] acquiring lock: {Name:mk6659ecb6ed7b24fa2ae65bc0b8e3b5916d75e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 21:55:47.127455 112102 out.go:177] * Starting control plane node stopped-upgrade-634233 in cluster stopped-upgrade-634233
I1025 21:55:47.128666 112102 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
W1025 21:55:47.730774 112102 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
I1025 21:55:47.730928 112102 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/config.json ...
I1025 21:55:47.731029 112102 cache.go:107] acquiring lock: {Name:mk66722b0c7d0802779bb91cd665f21f019e6dde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 21:55:47.731031 112102 cache.go:107] acquiring lock: {Name:mk042f89c1e87d68189138597e07a3dbc4e16f22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 21:55:47.731109 112102 cache.go:107] acquiring lock: {Name:mk7732063a37da305fc0bd9f5b667d3412caf0c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 21:55:47.731152 112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1025 21:55:47.731158 112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
I1025 21:55:47.731166 112102 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 156.061µs
I1025 21:55:47.731178 112102 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1025 21:55:47.731176 112102 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 73.608µs
I1025 21:55:47.731185 112102 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
I1025 21:55:47.731148 112102 cache.go:107] acquiring lock: {Name:mk1871a19eccad3e50c14cd19f1f8b2380957508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 21:55:47.731197 112102 cache.go:107] acquiring lock: {Name:mk4e47796820047372558a160f52936b408e80ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 21:55:47.731221 112102 start.go:365] acquiring machines lock for stopped-upgrade-634233: {Name:mk84b47429efad52c9c4eeca04f7cb6277d41bb4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1025 21:55:47.731234 112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
I1025 21:55:47.731239 112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
I1025 21:55:47.731228 112102 cache.go:107] acquiring lock: {Name:mk1fde4bf99dfe12b10193dcfb3fc9e08e8faf0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 21:55:47.731243 112102 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 229.277µs
I1025 21:55:47.731246 112102 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 51.05µs
I1025 21:55:47.731253 112102 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
I1025 21:55:47.731255 112102 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
I1025 21:55:47.731148 112102 cache.go:107] acquiring lock: {Name:mk5461a3bb7521360d94bbac10f9d5fe42facfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 21:55:47.731198 112102 cache.go:107] acquiring lock: {Name:mk4f4fd18a02ec82da75a8b516602f12eb4877dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 21:55:47.731299 112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
I1025 21:55:47.731313 112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
I1025 21:55:47.731333 112102 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 220.958µs
I1025 21:55:47.731342 112102 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 224.727µs
I1025 21:55:47.731349 112102 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
I1025 21:55:47.731350 112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
I1025 21:55:47.731353 112102 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
I1025 21:55:47.731364 112102 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 136.053µs
I1025 21:55:47.731386 112102 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
I1025 21:55:47.731457 112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
I1025 21:55:47.731471 112102 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 274.751µs
I1025 21:55:47.731484 112102 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
I1025 21:55:47.731502 112102 cache.go:87] Successfully saved all images to host disk.
I1025 21:56:23.953120 112102 start.go:369] acquired machines lock for "stopped-upgrade-634233" in 36.221869178s
I1025 21:56:23.953188 112102 start.go:96] Skipping create...Using existing machine configuration
I1025 21:56:23.953201 112102 fix.go:54] fixHost starting: minikube
I1025 21:56:23.953615 112102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:56:23.953659 112102 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:56:23.974701 112102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
I1025 21:56:23.975222 112102 main.go:141] libmachine: () Calling .GetVersion
I1025 21:56:23.975839 112102 main.go:141] libmachine: Using API Version 1
I1025 21:56:23.975884 112102 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:56:23.976409 112102 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:56:23.976637 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
I1025 21:56:23.976891 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetState
I1025 21:56:23.978772 112102 fix.go:102] recreateIfNeeded on stopped-upgrade-634233: state=Stopped err=<nil>
I1025 21:56:23.978824 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
W1025 21:56:23.978999 112102 fix.go:128] unexpected machine state, will restart: <nil>
I1025 21:56:23.980834 112102 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-634233" ...
I1025 21:56:23.982284 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .Start
I1025 21:56:23.982687 112102 main.go:141] libmachine: (stopped-upgrade-634233) Ensuring networks are active...
I1025 21:56:23.983450 112102 main.go:141] libmachine: (stopped-upgrade-634233) Ensuring network default is active
I1025 21:56:23.983869 112102 main.go:141] libmachine: (stopped-upgrade-634233) Ensuring network minikube-net is active
I1025 21:56:23.984416 112102 main.go:141] libmachine: (stopped-upgrade-634233) Getting domain xml...
I1025 21:56:23.985278 112102 main.go:141] libmachine: (stopped-upgrade-634233) Creating domain...
I1025 21:56:25.395743 112102 main.go:141] libmachine: (stopped-upgrade-634233) Waiting to get IP...
I1025 21:56:25.397051 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:25.397629 112102 main.go:141] libmachine: (stopped-upgrade-634233) Found IP for machine: 192.168.50.236
I1025 21:56:25.397667 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has current primary IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:25.397678 112102 main.go:141] libmachine: (stopped-upgrade-634233) Reserving static IP address...
I1025 21:56:25.398326 112102 main.go:141] libmachine: (stopped-upgrade-634233) Reserved static IP address: 192.168.50.236
I1025 21:56:25.398401 112102 main.go:141] libmachine: (stopped-upgrade-634233) Waiting for SSH to be available...
I1025 21:56:25.398440 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "stopped-upgrade-634233", mac: "52:54:00:26:b5:da", ip: "192.168.50.236"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:54:12 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:25.398479 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-634233", mac: "52:54:00:26:b5:da", ip: "192.168.50.236"}
I1025 21:56:25.398498 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Getting to WaitForSSH function...
I1025 21:56:25.401554 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:25.402046 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:54:12 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:25.402076 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:25.402220 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Using SSH client type: external
I1025 21:56:25.402601 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Using SSH private key: /home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa (-rw-------)
I1025 21:56:25.402657 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa -p 22] /usr/bin/ssh <nil>}
I1025 21:56:25.402691 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | About to run SSH command:
I1025 21:56:25.402705 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | exit 0
I1025 21:56:42.549288 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | SSH cmd err, output: exit status 255:
I1025 21:56:42.549326 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Error getting ssh command 'exit 0' : ssh command error:
I1025 21:56:42.549340 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | command : exit 0
I1025 21:56:42.549353 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | err : exit status 255
I1025 21:56:42.549367 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | output :
I1025 21:56:45.550307 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Getting to WaitForSSH function...
I1025 21:56:45.553110 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:45.553477 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:54:12 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:45.553527 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:45.553569 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Using SSH client type: external
I1025 21:56:45.553620 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Using SSH private key: /home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa (-rw-------)
I1025 21:56:45.553652 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa -p 22] /usr/bin/ssh <nil>}
I1025 21:56:45.553671 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | About to run SSH command:
I1025 21:56:45.553685 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | exit 0
I1025 21:56:51.831645 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | SSH cmd err, output: <nil>:
I1025 21:56:51.832003 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetConfigRaw
I1025 21:56:51.832627 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetIP
I1025 21:56:51.834739 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:51.835181 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:51.835217 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:51.835468 112102 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/config.json ...
I1025 21:56:51.835686 112102 machine.go:88] provisioning docker machine ...
I1025 21:56:51.835711 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
I1025 21:56:51.835926 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetMachineName
I1025 21:56:51.836099 112102 buildroot.go:166] provisioning hostname "stopped-upgrade-634233"
I1025 21:56:51.836123 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetMachineName
I1025 21:56:51.836269 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
I1025 21:56:51.838355 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:51.838765 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:51.838808 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:51.838985 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
I1025 21:56:51.839186 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:51.839374 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:51.839577 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
I1025 21:56:51.839816 112102 main.go:141] libmachine: Using SSH client type: native
I1025 21:56:51.840342 112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil> [] 0s} 192.168.50.236 22 <nil> <nil>}
I1025 21:56:51.840360 112102 main.go:141] libmachine: About to run SSH command:
sudo hostname stopped-upgrade-634233 && echo "stopped-upgrade-634233" | sudo tee /etc/hostname
I1025 21:56:51.974781 112102 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-634233
I1025 21:56:51.974814 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
I1025 21:56:51.977831 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:51.978205 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:51.978237 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:51.978418 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
I1025 21:56:51.978632 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:51.978779 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:51.978992 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
I1025 21:56:51.979156 112102 main.go:141] libmachine: Using SSH client type: native
I1025 21:56:51.979469 112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil> [] 0s} 192.168.50.236 22 <nil> <nil>}
I1025 21:56:51.979489 112102 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sstopped-upgrade-634233' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-634233/g' /etc/hosts;
else
echo '127.0.1.1 stopped-upgrade-634233' | sudo tee -a /etc/hosts;
fi
fi
I1025 21:56:52.113234 112102 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1025 21:56:52.113266 112102 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17488-80960/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-80960/.minikube}
I1025 21:56:52.113291 112102 buildroot.go:174] setting up certificates
I1025 21:56:52.113311 112102 provision.go:83] configureAuth start
I1025 21:56:52.113349 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetMachineName
I1025 21:56:52.113594 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetIP
I1025 21:56:52.116350 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.116826 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:52.116858 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.117056 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
I1025 21:56:52.119304 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.119727 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:52.119770 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.119895 112102 provision.go:138] copyHostCerts
I1025 21:56:52.119954 112102 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-80960/.minikube/cert.pem, removing ...
I1025 21:56:52.119981 112102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-80960/.minikube/cert.pem
I1025 21:56:52.120075 112102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-80960/.minikube/cert.pem (1123 bytes)
I1025 21:56:52.120193 112102 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-80960/.minikube/key.pem, removing ...
I1025 21:56:52.120204 112102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-80960/.minikube/key.pem
I1025 21:56:52.120253 112102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-80960/.minikube/key.pem (1679 bytes)
I1025 21:56:52.120308 112102 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-80960/.minikube/ca.pem, removing ...
I1025 21:56:52.120316 112102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-80960/.minikube/ca.pem
I1025 21:56:52.120339 112102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-80960/.minikube/ca.pem (1082 bytes)
I1025 21:56:52.120380 112102 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-80960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-634233 san=[192.168.50.236 192.168.50.236 localhost 127.0.0.1 minikube stopped-upgrade-634233]
I1025 21:56:52.193166 112102 provision.go:172] copyRemoteCerts
I1025 21:56:52.193236 112102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1025 21:56:52.193262 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
I1025 21:56:52.195941 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.196210 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:52.196263 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.196397 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
I1025 21:56:52.196581 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:52.196757 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
I1025 21:56:52.196908 112102 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa Username:docker}
I1025 21:56:52.286373 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1025 21:56:52.301342 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1025 21:56:52.315903 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1025 21:56:52.330603 112102 provision.go:86] duration metric: configureAuth took 217.274705ms
I1025 21:56:52.330639 112102 buildroot.go:189] setting minikube options for container-runtime
I1025 21:56:52.330823 112102 config.go:182] Loaded profile config "stopped-upgrade-634233": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I1025 21:56:52.330853 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
I1025 21:56:52.331175 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
I1025 21:56:52.334359 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.334777 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:52.334817 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.335002 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
I1025 21:56:52.335230 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:52.335424 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:52.335606 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
I1025 21:56:52.335799 112102 main.go:141] libmachine: Using SSH client type: native
I1025 21:56:52.336120 112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil> [] 0s} 192.168.50.236 22 <nil> <nil>}
I1025 21:56:52.336131 112102 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1025 21:56:52.466037 112102 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I1025 21:56:52.466067 112102 buildroot.go:70] root file system type: tmpfs
I1025 21:56:52.466195 112102 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1025 21:56:52.466226 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
I1025 21:56:52.469064 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.469445 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:52.469481 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.469657 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
I1025 21:56:52.469861 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:52.470050 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:52.470197 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
I1025 21:56:52.470339 112102 main.go:141] libmachine: Using SSH client type: native
I1025 21:56:52.470717 112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil> [] 0s} 192.168.50.236 22 <nil> <nil>}
I1025 21:56:52.470819 112102 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1025 21:56:52.607454 112102 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1025 21:56:52.607489 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
I1025 21:56:52.610280 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.610690 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:52.610728 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:52.610871 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
I1025 21:56:52.611081 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:52.611238 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:52.611482 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
I1025 21:56:52.611725 112102 main.go:141] libmachine: Using SSH client type: native
I1025 21:56:52.612045 112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil> [] 0s} 192.168.50.236 22 <nil> <nil>}
I1025 21:56:52.612063 112102 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1025 21:56:53.439675 112102 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I1025 21:56:53.439706 112102 machine.go:91] provisioned docker machine in 1.604003976s
I1025 21:56:53.439717 112102 start.go:300] post-start starting for "stopped-upgrade-634233" (driver="kvm2")
I1025 21:56:53.439740 112102 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1025 21:56:53.439763 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
I1025 21:56:53.440106 112102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1025 21:56:53.440142 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
I1025 21:56:53.442783 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:53.443143 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:53.443187 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:53.443387 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
I1025 21:56:53.443609 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:53.443809 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
I1025 21:56:53.443954 112102 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa Username:docker}
I1025 21:56:53.538118 112102 ssh_runner.go:195] Run: cat /etc/os-release
I1025 21:56:53.543631 112102 info.go:137] Remote host: Buildroot 2019.02.7
I1025 21:56:53.543668 112102 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-80960/.minikube/addons for local assets ...
I1025 21:56:53.543767 112102 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-80960/.minikube/files for local assets ...
I1025 21:56:53.543924 112102 filesync.go:149] local asset: /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem -> 882442.pem in /etc/ssl/certs
I1025 21:56:53.544111 112102 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1025 21:56:53.550794 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem --> /etc/ssl/certs/882442.pem (1708 bytes)
I1025 21:56:53.565050 112102 start.go:303] post-start completed in 125.318366ms
I1025 21:56:53.565073 112102 fix.go:56] fixHost completed within 29.611873312s
I1025 21:56:53.565100 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
I1025 21:56:53.567970 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:53.568408 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:53.568444 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:53.568657 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
I1025 21:56:53.568893 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:53.569097 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:53.569240 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
I1025 21:56:53.569449 112102 main.go:141] libmachine: Using SSH client type: native
I1025 21:56:53.569940 112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil> [] 0s} 192.168.50.236 22 <nil> <nil>}
I1025 21:56:53.569957 112102 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1025 21:56:53.701183 112102 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698271013.644529385
I1025 21:56:53.701210 112102 fix.go:206] guest clock: 1698271013.644529385
I1025 21:56:53.701220 112102 fix.go:219] Guest: 2023-10-25 21:56:53.644529385 +0000 UTC Remote: 2023-10-25 21:56:53.565077837 +0000 UTC m=+66.606241784 (delta=79.451548ms)
I1025 21:56:53.701267 112102 fix.go:190] guest clock delta is within tolerance: 79.451548ms
I1025 21:56:53.701275 112102 start.go:83] releasing machines lock for "stopped-upgrade-634233", held for 29.74812345s
I1025 21:56:53.701313 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
I1025 21:56:53.701617 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetIP
I1025 21:56:53.704359 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:53.704791 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:53.704820 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:53.705063 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
I1025 21:56:53.705702 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
I1025 21:56:53.705916 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
I1025 21:56:53.706029 112102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1025 21:56:53.706073 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
I1025 21:56:53.706131 112102 ssh_runner.go:195] Run: cat /version.json
I1025 21:56:53.706172 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
I1025 21:56:53.708989 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:53.709397 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:53.709444 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:53.709469 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:53.709533 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
I1025 21:56:53.709670 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:53.709833 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
I1025 21:56:53.709854 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:53.709881 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:53.709975 112102 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa Username:docker}
I1025 21:56:53.710091 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
I1025 21:56:53.710231 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
I1025 21:56:53.710353 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
I1025 21:56:53.710504 112102 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa Username:docker}
W1025 21:56:53.827258 112102 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
stdout:
stderr:
cat: /version.json: No such file or directory
I1025 21:56:53.827343 112102 ssh_runner.go:195] Run: systemctl --version
I1025 21:56:53.832682 112102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1025 21:56:53.838192 112102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1025 21:56:53.838285 112102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I1025 21:56:53.845857 112102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I1025 21:56:53.852492 112102 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
I1025 21:56:53.852515 112102 start.go:472] detecting cgroup driver to use...
I1025 21:56:53.852659 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1025 21:56:53.866108 112102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
I1025 21:56:53.872964 112102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1025 21:56:53.880686 112102 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1025 21:56:53.880748 112102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1025 21:56:53.889458 112102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1025 21:56:53.896158 112102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1025 21:56:53.904005 112102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1025 21:56:53.910964 112102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1025 21:56:53.920299 112102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1025 21:56:53.927421 112102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1025 21:56:53.934472 112102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1025 21:56:53.940872 112102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1025 21:56:54.018241 112102 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1025 21:56:54.034541 112102 start.go:472] detecting cgroup driver to use...
I1025 21:56:54.034622 112102 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1025 21:56:54.051519 112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1025 21:56:54.062873 112102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1025 21:56:54.077877 112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1025 21:56:54.087857 112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1025 21:56:54.100985 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I1025 21:56:54.114086 112102 ssh_runner.go:195] Run: which cri-dockerd
I1025 21:56:54.118655 112102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1025 21:56:54.125016 112102 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1025 21:56:54.136461 112102 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1025 21:56:54.229546 112102 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1025 21:56:54.322360 112102 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
I1025 21:56:54.322525 112102 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1025 21:56:54.334578 112102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1025 21:56:54.425295 112102 ssh_runner.go:195] Run: sudo systemctl restart docker
I1025 21:56:55.861359 112102 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.435994986s)
I1025 21:56:55.861436 112102 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1025 21:56:55.911261 112102 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1025 21:56:55.968056 112102 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 19.03.5 ...
I1025 21:56:55.968102 112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetIP
I1025 21:56:55.971517 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:55.972202 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
I1025 21:56:55.972299 112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
I1025 21:56:55.972477 112102 ssh_runner.go:195] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I1025 21:56:55.976368 112102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1025 21:56:55.985456 112102 localpath.go:92] copying /home/jenkins/minikube-integration/17488-80960/.minikube/client.crt -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/client.crt
I1025 21:56:55.985636 112102 localpath.go:117] copying /home/jenkins/minikube-integration/17488-80960/.minikube/client.key -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/client.key
I1025 21:56:55.985781 112102 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
I1025 21:56:55.985835 112102 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1025 21:56:56.021405 112102 docker.go:693] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.17.0
k8s.gcr.io/kube-controller-manager:v1.17.0
k8s.gcr.io/kube-apiserver:v1.17.0
k8s.gcr.io/kube-scheduler:v1.17.0
kubernetesui/dashboard:v2.0.0-beta8
k8s.gcr.io/coredns:1.6.5
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
k8s.gcr.io/kube-addon-manager:v9.0.2
k8s.gcr.io/pause:3.1
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I1025 21:56:56.021435 112102 docker.go:699] registry.k8s.io/kube-apiserver:v1.17.0 wasn't preloaded
I1025 21:56:56.021446 112102 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.17.0 registry.k8s.io/kube-controller-manager:v1.17.0 registry.k8s.io/kube-scheduler:v1.17.0 registry.k8s.io/kube-proxy:v1.17.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
I1025 21:56:56.022934 112102 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
I1025 21:56:56.022963 112102 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
I1025 21:56:56.023010 112102 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
I1025 21:56:56.023162 112102 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I1025 21:56:56.023215 112102 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
I1025 21:56:56.023221 112102 image.go:134] retrieving image: registry.k8s.io/pause:3.1
I1025 21:56:56.023258 112102 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
I1025 21:56:56.023264 112102 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1025 21:56:56.023720 112102 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
I1025 21:56:56.023729 112102 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
I1025 21:56:56.023785 112102 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
I1025 21:56:56.024012 112102 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1025 21:56:56.024021 112102 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
I1025 21:56:56.024041 112102 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
I1025 21:56:56.024090 112102 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I1025 21:56:56.024081 112102 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
I1025 21:56:56.187246 112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
I1025 21:56:56.194418 112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I1025 21:56:56.215246 112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.5
I1025 21:56:56.235944 112102 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
I1025 21:56:56.235997 112102 docker.go:318] Removing image: registry.k8s.io/pause:3.1
I1025 21:56:56.236038 112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
I1025 21:56:56.267373 112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.17.0
I1025 21:56:56.279019 112102 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I1025 21:56:56.279074 112102 docker.go:318] Removing image: registry.k8s.io/etcd:3.4.3-0
I1025 21:56:56.279122 112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I1025 21:56:56.289059 112102 cache_images.go:116] "registry.k8s.io/coredns:1.6.5" needs transfer: "registry.k8s.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
I1025 21:56:56.289125 112102 docker.go:318] Removing image: registry.k8s.io/coredns:1.6.5
I1025 21:56:56.289231 112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.5
I1025 21:56:56.304720 112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
I1025 21:56:56.304829 112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
I1025 21:56:56.371830 112102 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.17.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
I1025 21:56:56.371926 112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I1025 21:56:56.371955 112102 docker.go:318] Removing image: registry.k8s.io/kube-apiserver:v1.17.0
I1025 21:56:56.372027 112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.17.0
I1025 21:56:56.372027 112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
I1025 21:56:56.383426 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
I1025 21:56:56.383439 112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
I1025 21:56:56.383677 112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
I1025 21:56:56.394357 112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.17.0
I1025 21:56:56.413718 112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.17.0
I1025 21:56:56.433539 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
I1025 21:56:56.433630 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
I1025 21:56:56.433562 112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
I1025 21:56:56.433819 112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
I1025 21:56:56.497573 112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.17.0
I1025 21:56:56.514219 112102 docker.go:285] Loading image: /var/lib/minikube/images/pause_3.1
I1025 21:56:56.514651 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
I1025 21:56:56.521644 112102 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.17.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
I1025 21:56:56.521698 112102 docker.go:318] Removing image: registry.k8s.io/kube-controller-manager:v1.17.0
I1025 21:56:56.521753 112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.17.0
I1025 21:56:56.521830 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
I1025 21:56:56.522438 112102 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.17.0" needs transfer: "registry.k8s.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
I1025 21:56:56.522503 112102 docker.go:318] Removing image: registry.k8s.io/kube-proxy:v1.17.0
I1025 21:56:56.522561 112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.17.0
I1025 21:56:56.681267 112102 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.17.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
I1025 21:56:56.681325 112102 docker.go:318] Removing image: registry.k8s.io/kube-scheduler:v1.17.0
I1025 21:56:56.681417 112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.17.0
I1025 21:56:56.794790 112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
I1025 21:56:56.794912 112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
I1025 21:56:56.795011 112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
I1025 21:56:56.795043 112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
I1025 21:56:56.795212 112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
I1025 21:56:56.807630 112102 docker.go:285] Loading image: /var/lib/minikube/images/coredns_1.6.5
I1025 21:56:56.807663 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
I1025 21:56:56.854516 112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
I1025 21:56:56.854637 112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
I1025 21:56:56.864936 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
I1025 21:56:56.865220 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
I1025 21:56:57.243248 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
I1025 21:56:57.243557 112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 from cache
I1025 21:56:57.607528 112102 docker.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
I1025 21:56:57.607565 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
I1025 21:56:58.049506 112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I1025 21:56:58.849331 112102 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (1.241739101s)
I1025 21:56:58.849370 112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 from cache
I1025 21:56:58.849393 112102 docker.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
I1025 21:56:58.849409 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
I1025 21:56:58.849432 112102 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I1025 21:56:58.849475 112102 docker.go:318] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1025 21:56:58.849538 112102 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1025 21:56:59.240594 112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 from cache
I1025 21:56:59.240641 112102 docker.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
I1025 21:56:59.240659 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
I1025 21:56:59.240695 112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I1025 21:56:59.240798 112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1025 21:56:59.582335 112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 from cache
I1025 21:56:59.582399 112102 docker.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
I1025 21:56:59.582419 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
I1025 21:56:59.582564 112102 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I1025 21:56:59.582643 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
I1025 21:56:59.846695 112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 from cache
I1025 21:56:59.846745 112102 docker.go:285] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
I1025 21:56:59.846766 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
I1025 21:57:00.360525 112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 from cache
I1025 21:57:00.360568 112102 docker.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1025 21:57:00.360587 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
I1025 21:57:00.989530 112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1025 21:57:00.989577 112102 cache_images.go:123] Successfully loaded all cached images
I1025 21:57:00.989586 112102 cache_images.go:92] LoadImages completed in 4.968121671s
I1025 21:57:00.989653 112102 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1025 21:57:01.042043 112102 cni.go:84] Creating CNI manager for ""
I1025 21:57:01.042070 112102 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I1025 21:57:01.042093 112102 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1025 21:57:01.042130 112102 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.236 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-634233 NodeName:stopped-upgrade-634233 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I1025 21:57:01.042323 112102 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.236
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "stopped-upgrade-634233"
  kubeletExtraArgs:
    node-ip: 192.168.50.236
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.50.236"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1025 21:57:01.042433 112102 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=stopped-upgrade-634233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.236
[Install]
config:
{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1025 21:57:01.042515 112102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
I1025 21:57:01.049616 112102 binaries.go:47] Didn't find k8s binaries: didn't find preexisting kubectl
Initiating transfer...
I1025 21:57:01.049678 112102 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
I1025 21:57:01.158844 112102 out.go:204] * Another minikube instance is downloading dependencies...
I1025 21:57:01.160320 112102 out.go:204] * Another minikube instance is downloading dependencies...
I1025 21:57:01.161750 112102 out.go:204] * Another minikube instance is downloading dependencies...
I1025 21:57:05.231603 112102 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubeadm.sha256
I1025 21:57:05.231737 112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
I1025 21:57:05.237439 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
I1025 21:57:09.047680 112102 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubectl.sha256
I1025 21:57:09.047835 112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
I1025 21:57:09.058238 112102 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
I1025 21:57:09.058288 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
I1025 21:57:50.042000 112102 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubelet.sha256
I1025 21:57:50.042056 112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1025 21:57:50.056063 112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
I1025 21:57:50.061943 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
I1025 21:57:50.434826 112102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1025 21:57:50.442229 112102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
I1025 21:57:50.454229 112102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1025 21:57:50.467080 112102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I1025 21:57:50.479416 112102 ssh_runner.go:195] Run: grep 192.168.50.236 control-plane.minikube.internal$ /etc/hosts
I1025 21:57:50.484166 112102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.236 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1025 21:57:50.495919 112102 certs.go:56] Setting up /home/jenkins/minikube-integration/17488-80960/.minikube/profiles for IP: 192.168.50.236
I1025 21:57:50.495957 112102 certs.go:190] acquiring lock for shared ca certs: {Name:mk95bc4bbfee71bbd045d1866d072591cdac4e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 21:57:50.496130 112102 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17488-80960/.minikube/ca.key
I1025 21:57:50.496186 112102 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17488-80960/.minikube/proxy-client-ca.key
I1025 21:57:50.496260 112102 localpath.go:92] copying /home/jenkins/minikube-integration/17488-80960/.minikube/client.crt -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/client.crt
I1025 21:57:50.496411 112102 localpath.go:117] copying /home/jenkins/minikube-integration/17488-80960/.minikube/client.key -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/client.key
I1025 21:57:50.496567 112102 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/client.key
I1025 21:57:50.496587 112102 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key.4e4dee8d
I1025 21:57:50.496614 112102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt.4e4dee8d with IP's: [192.168.50.236 10.96.0.1 127.0.0.1 10.0.0.1]
I1025 21:57:50.609353 112102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt.4e4dee8d ...
I1025 21:57:50.609393 112102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt.4e4dee8d: {Name:mke3274006c59a51371ddf063e61cd3592fc8795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 21:57:50.609622 112102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key.4e4dee8d ...
I1025 21:57:50.609643 112102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key.4e4dee8d: {Name:mkbd85aa1d488ace7ab0f78dacaf385c02ef80a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 21:57:50.609778 112102 certs.go:337] copying /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt.4e4dee8d -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt
I1025 21:57:50.609888 112102 certs.go:341] copying /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key.4e4dee8d -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key
I1025 21:57:50.609969 112102 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.key
I1025 21:57:50.609996 112102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.crt with IP's: []
I1025 21:57:51.008579 112102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.crt ...
I1025 21:57:51.008610 112102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.crt: {Name:mk97cda8597bd8dd0454f5b34698d39e84de7a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 21:57:51.008781 112102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.key ...
I1025 21:57:51.008799 112102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.key: {Name:mkf3d8f627f135177b5d2c5f8f8b6aa33103aeaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 21:57:51.009029 112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/88244.pem (1338 bytes)
W1025 21:57:51.009082 112102 certs.go:433] ignoring /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/88244_empty.pem, impossibly tiny 0 bytes
I1025 21:57:51.009099 112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca-key.pem (1679 bytes)
I1025 21:57:51.009130 112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem (1082 bytes)
I1025 21:57:51.009159 112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/cert.pem (1123 bytes)
I1025 21:57:51.009198 112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/key.pem (1679 bytes)
I1025 21:57:51.009266 112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem (1708 bytes)
I1025 21:57:51.009895 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1025 21:57:51.027269 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1025 21:57:51.042003 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1025 21:57:51.056896 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1025 21:57:51.073090 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1025 21:57:51.088824 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1025 21:57:51.105095 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1025 21:57:51.120739 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1025 21:57:51.135554 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem --> /usr/share/ca-certificates/882442.pem (1708 bytes)
I1025 21:57:51.149734 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1025 21:57:51.167049 112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/certs/88244.pem --> /usr/share/ca-certificates/88244.pem (1338 bytes)
I1025 21:57:51.181709 112102 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (774 bytes)
I1025 21:57:51.192139 112102 ssh_runner.go:195] Run: openssl version
I1025 21:57:51.198978 112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/882442.pem && ln -fs /usr/share/ca-certificates/882442.pem /etc/ssl/certs/882442.pem"
I1025 21:57:51.207819 112102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/882442.pem
I1025 21:57:51.213258 112102 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:19 /usr/share/ca-certificates/882442.pem
I1025 21:57:51.213318 112102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/882442.pem
I1025 21:57:51.228434 112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/882442.pem /etc/ssl/certs/3ec20f2e.0"
I1025 21:57:51.238181 112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1025 21:57:51.247027 112102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1025 21:57:51.253672 112102 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:13 /usr/share/ca-certificates/minikubeCA.pem
I1025 21:57:51.253728 112102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1025 21:57:51.266906 112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1025 21:57:51.274400 112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88244.pem && ln -fs /usr/share/ca-certificates/88244.pem /etc/ssl/certs/88244.pem"
I1025 21:57:51.282082 112102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88244.pem
I1025 21:57:51.287987 112102 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:19 /usr/share/ca-certificates/88244.pem
I1025 21:57:51.288040 112102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88244.pem
I1025 21:57:51.300811 112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88244.pem /etc/ssl/certs/51391683.0"
I1025 21:57:51.308229 112102 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I1025 21:57:51.312672 112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1025 21:57:51.328249 112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1025 21:57:51.343032 112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1025 21:57:51.355651 112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1025 21:57:51.367489 112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1025 21:57:51.378975 112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1025 21:57:51.391334 112102 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-634233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVe
rsion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.236 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPa
th: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
I1025 21:57:51.391478 112102 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1025 21:57:51.430350 112102 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1025 21:57:51.439323 112102 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I1025 21:57:51.439351 112102 kubeadm.go:636] restartCluster start
I1025 21:57:51.439399 112102 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1025 21:57:51.445868 112102 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1025 21:57:51.446460 112102 kubeconfig.go:135] verify returned: extract IP: "stopped-upgrade-634233" does not appear in /home/jenkins/minikube-integration/17488-80960/kubeconfig
I1025 21:57:51.446615 112102 kubeconfig.go:146] "stopped-upgrade-634233" context is missing from /home/jenkins/minikube-integration/17488-80960/kubeconfig - will repair!
I1025 21:57:51.446998 112102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/kubeconfig: {Name:mk4723f12542c40c1c944f4b4dc7af3f0a23b0b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1025 21:57:51.447831 112102 kapi.go:59] client config for stopped-upgrade-634233: &rest.Config{Host:"https://192.168.50.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/client.key", CAFile:"/home/jenkins/minikube-integration/17488-80960/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1025 21:57:51.448946 112102 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1025 21:57:51.454202 112102 kubeadm.go:602] needs reconfigure: configs differ:
** stderr **
diff: can't stat '/var/tmp/minikube/kubeadm.yaml': No such file or directory
** /stderr **
I1025 21:57:51.454218 112102 kubeadm.go:1128] stopping kube-system containers ...
I1025 21:57:51.454276 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1025 21:57:51.489478 112102 docker.go:464] Stopping containers: [111a4f5088ac 53138481ecbd a131edff470e 09fabc795729 46604f6a66ea 2d616a9c0cbc eab03f304139 a4dfe92c6dc7 52d24719c1f3 ecbc25e58349]
I1025 21:57:51.489556 112102 ssh_runner.go:195] Run: docker stop 111a4f5088ac 53138481ecbd a131edff470e 09fabc795729 46604f6a66ea 2d616a9c0cbc eab03f304139 a4dfe92c6dc7 52d24719c1f3 ecbc25e58349
I1025 21:57:51.531256 112102 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1025 21:57:51.543077 112102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1025 21:57:51.550268 112102 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1025 21:57:51.550336 112102 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1025 21:57:51.557283 112102 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1025 21:57:51.557307 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1025 21:57:51.629509 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1025 21:57:52.686417 112102 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05686163s)
I1025 21:57:52.686457 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1025 21:57:52.944829 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1025 21:57:53.063902 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I1025 21:57:53.173371 112102 api_server.go:52] waiting for apiserver process to appear ...
I1025 21:57:53.173454 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:53.187999 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:53.697472 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:54.197984 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:54.697638 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:55.197736 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:55.697345 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:56.197293 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:56.697277 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:57.197744 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:57.697308 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:58.198145 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:58.697515 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:59.197973 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:57:59.698007 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:00.197325 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:00.697580 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:01.197459 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:01.697471 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:02.197683 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:02.698234 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:03.197789 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:03.697521 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:04.198089 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:04.697863 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:05.197359 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:05.698192 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:06.197339 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:06.697321 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:07.197823 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:07.697508 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:08.197340 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:08.697359 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:09.198155 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:09.697985 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:10.197319 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:10.698299 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:11.197871 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:11.697358 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:12.197827 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:12.697644 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:13.197514 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:13.698125 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:14.197655 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:14.697698 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:15.197343 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:15.697296 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:16.200675 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:16.697598 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:17.197527 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:17.699520 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:18.197999 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:18.697318 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:19.197427 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:19.698114 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:20.197223 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:20.697685 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:21.197739 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:21.698068 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:22.197730 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:22.698086 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:23.197325 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:23.698261 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:24.197867 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:24.698110 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:25.197398 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:25.697858 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:26.197314 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:26.697534 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:27.208952 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:27.697321 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:28.197987 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:28.698117 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:29.197909 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:29.698056 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:30.197638 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:30.698096 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:31.197402 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:31.698244 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:32.197911 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:32.697362 112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 21:58:32.707681 112102 api_server.go:72] duration metric: took 39.534307295s to wait for apiserver process to appear ...
I1025 21:58:32.707711 112102 api_server.go:88] waiting for apiserver healthz status ...
I1025 21:58:32.707731 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:32.708721 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:32.708789 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:32.709323 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:33.210148 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:33.210940 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:33.709487 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:33.710073 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:34.210185 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:34.210896 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:34.709436 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:34.710147 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:35.209617 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:35.210240 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:35.709787 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:35.710463 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:36.209807 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:36.210463 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:36.710078 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:36.710839 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:37.209975 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:37.210593 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:37.710182 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:37.710782 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:38.210399 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:38.211082 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:38.709530 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:38.710155 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:39.210275 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:39.210866 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:39.710160 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:39.710772 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:40.210390 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:40.210985 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:40.710229 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:40.710837 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:41.210455 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:41.211126 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:41.709816 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:41.710389 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:42.210057 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:42.210725 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:42.710341 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:42.710887 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:43.209412 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:43.209918 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:43.710307 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:43.710917 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:44.209971 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:44.210602 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:44.710234 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:44.710853 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:45.210501 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:45.211310 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:45.709512 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:45.710117 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:46.209678 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:46.210488 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:46.710059 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:46.710656 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:47.210450 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:47.211062 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:47.710358 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:47.710985 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:48.210206 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:48.210880 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:48.709404 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:48.710014 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:49.210025 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:49.210895 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:49.710044 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:49.710791 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:50.210343 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:50.210955 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:50.709959 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:50.710624 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:51.210276 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:51.210948 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:51.709697 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:51.710376 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:52.210290 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:52.211031 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:52.710316 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:52.710977 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:53.209542 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:53.210341 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:53.709863 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:53.710577 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:54.209553 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:54.210250 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:54.709667 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:54.710368 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:55.209947 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:55.210627 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:55.710166 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:55.710848 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:56.210468 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:56.211173 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:56.709705 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:56.710366 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:57.210362 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:57.211132 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:57.709746 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:57.710411 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:58.209502 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:58.210078 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:58.709619 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:58.710316 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:59.210128 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:59.210748 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:58:59.710476 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:58:59.711241 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:00.209701 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:00.210495 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:00.710093 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:00.710624 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:01.210227 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:01.210838 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:01.709506 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:01.710179 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:02.210102 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:02.212604 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:02.710269 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:02.710889 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:03.209458 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:03.210085 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:03.709644 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:03.710269 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:04.210280 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:04.211015 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:04.709538 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:04.710289 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:05.209817 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:05.210566 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:05.709858 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:05.710516 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:06.209747 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:06.210470 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:06.710141 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:06.710773 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:07.209984 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:07.210686 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:07.710309 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:07.710946 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:08.209473 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:08.210141 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:08.710338 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:08.710898 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:09.210190 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:09.210821 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:09.710115 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:09.710782 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:10.210085 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:10.210770 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:10.710336 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:10.711000 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:11.209559 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:11.210254 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:11.709865 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:11.710586 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:12.210355 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:12.210940 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:12.709487 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:12.710163 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:13.209712 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:13.210388 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:13.709947 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:13.710631 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:14.210417 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:14.210951 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:14.709489 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:14.710118 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:15.209668 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:15.210289 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:15.709927 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:15.710610 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:16.210322 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:16.211138 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:16.709681 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:16.710367 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:17.210418 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:17.211086 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:17.709632 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:17.710312 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:18.209506 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:18.210210 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:18.709757 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:18.710506 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:19.209477 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:19.210099 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:19.709476 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:19.710136 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:20.210368 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:20.211099 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:20.709488 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:20.710098 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:21.209668 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:21.210461 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:21.709923 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:21.710552 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:22.210271 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:22.211003 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:22.710396 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:22.711085 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:23.209611 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:23.210254 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:23.710469 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:23.711058 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:24.209846 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:24.210469 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:24.710017 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:24.710608 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:25.209760 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:25.210443 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:25.709743 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:25.710473 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:26.210089 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:26.210734 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:26.709963 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:26.710627 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:27.210338 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:27.211038 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:27.710244 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:27.711037 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:28.210433 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:28.211284 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:28.709825 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:28.710496 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:29.210468 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:29.211125 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:29.709488 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:29.710254 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:30.209498 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:30.210197 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:30.709517 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:30.710142 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:31.209708 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:31.210398 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:31.710105 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:31.710810 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:32.210092 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:32.210753 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:32.710490 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 21:59:32.764127 112102 logs.go:284] 1 containers: [615f2a0c1ed5]
I1025 21:59:32.764211 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 21:59:32.809251 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 21:59:32.809345 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 21:59:32.855804 112102 logs.go:284] 0 containers: []
W1025 21:59:32.855828 112102 logs.go:286] No container was found matching "coredns"
I1025 21:59:32.855882 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 21:59:32.921645 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 21:59:32.921715 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 21:59:32.961462 112102 logs.go:284] 0 containers: []
W1025 21:59:32.961490 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 21:59:32.961549 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 21:59:32.997755 112102 logs.go:284] 2 containers: [a3ae303714a2 53138481ecbd]
I1025 21:59:32.997850 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 21:59:33.034926 112102 logs.go:284] 0 containers: []
W1025 21:59:33.034957 112102 logs.go:286] No container was found matching "kindnet"
I1025 21:59:33.034970 112102 logs.go:123] Gathering logs for kubelet ...
I1025 21:59:33.034986 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 21:59:33.074719 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:14 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:14.747557 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:33.076385 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:15 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:15.692044 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:33.082949 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:19 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:19.485561 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:33.092592 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:25 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:25.808146 6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 21:59:33.095629 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:27 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:27.816404 6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 21:59:33.100891 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:30 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:30.889849 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:33.103159 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:31 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:31.919778 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
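The "Found kubelet problem" entries above come from minikube scanning the `journalctl -u kubelet` output for error patterns. A minimal sketch of that kind of filter, run against a stand-in log file (the file path and sample lines here are invented for illustration, not taken from the VM):

```shell
# Build a tiny stand-in kubelet log with one healthy line and one failing line
cat <<'EOF' > /tmp/kubelet-sample.log
I1025 21:59:10.000000 6297 kubelet.go:100] normal startup line
E1025 21:59:19.485561 6297 pod_workers.go:191] Error syncing pod ... CrashLoopBackOff ...
EOF

# Count the CrashLoopBackOff entries, the same symptom flagged in the log above
grep -c 'CrashLoopBackOff' /tmp/kubelet-sample.log
```

Against the real VM the equivalent would be piping `sudo journalctl -u kubelet -n 400` into the grep instead of the sample file.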
I1025 21:59:33.105196 112102 logs.go:123] Gathering logs for dmesg ...
I1025 21:59:33.105220 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 21:59:33.117307 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 21:59:33.117331 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 21:59:33.194424 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
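The `tls: private key does not match public key` failure above means kubectl found a client certificate whose embedded public key does not correspond to the private key beside it, which can happen when an upgrade regenerates one file but not the other. A hedged way to check a pair with openssl, demonstrated here on a throwaway self-signed pair (the minikube cert paths vary by version, so a freshly generated pair is used instead of guessing them):

```shell
# Generate a throwaway key + self-signed cert so the check is self-contained
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -subj "/CN=demo" 2>/dev/null

# A cert and key match exactly when they yield the same public key
crt_pub=$(openssl x509 -in /tmp/demo.crt -noout -pubkey)
key_pub=$(openssl pkey -in /tmp/demo.key -pubout 2>/dev/null)

if [ "$crt_pub" = "$key_pub" ]; then echo match; else echo MISMATCH; fi
```

Running the same two `openssl` extractions against the cert/key pair kubectl is actually using would show which file went stale.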
I1025 21:59:33.194451 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 21:59:33.194469 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 21:59:33.235178 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 21:59:33.235216 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 21:59:33.354812 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 21:59:33.354846 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 21:59:33.406183 112102 logs.go:123] Gathering logs for kube-controller-manager [a3ae303714a2] ...
I1025 21:59:33.406213 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ae303714a2"
I1025 21:59:33.450038 112102 logs.go:123] Gathering logs for kube-apiserver [615f2a0c1ed5] ...
I1025 21:59:33.450071 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615f2a0c1ed5"
I1025 21:59:33.524729 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 21:59:33.524761 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 21:59:33.567717 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 21:59:33.567750 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 21:59:33.614986 112102 logs.go:123] Gathering logs for Docker ...
I1025 21:59:33.615015 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 21:59:33.656670 112102 logs.go:123] Gathering logs for container status ...
I1025 21:59:33.656707 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 21:59:33.684116 112102 out.go:309] Setting ErrFile to fd 2...
I1025 21:59:33.684148 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 21:59:33.684211 112102 out.go:239] X Problems detected in kubelet:
W1025 21:59:33.684262 112102 out.go:239] Oct 25 21:59:19 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:19.485561 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:33.684285 112102 out.go:239] Oct 25 21:59:25 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:25.808146 6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 21:59:33.684297 112102 out.go:239] Oct 25 21:59:27 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:27.816404 6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 21:59:33.684305 112102 out.go:239] Oct 25 21:59:30 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:30.889849 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:33.684316 112102 out.go:239] Oct 25 21:59:31 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:31.919778 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 21:59:33.684329 112102 out.go:309] Setting ErrFile to fd 2...
I1025 21:59:33.684338 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:59:43.685399 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:43.686177 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:43.686302 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 21:59:43.719867 112102 logs.go:284] 1 containers: [615f2a0c1ed5]
I1025 21:59:43.719964 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 21:59:43.759211 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 21:59:43.759282 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 21:59:43.796185 112102 logs.go:284] 0 containers: []
W1025 21:59:43.796227 112102 logs.go:286] No container was found matching "coredns"
I1025 21:59:43.796303 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 21:59:43.834577 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 21:59:43.834659 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 21:59:43.874092 112102 logs.go:284] 0 containers: []
W1025 21:59:43.874121 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 21:59:43.874196 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 21:59:43.921307 112102 logs.go:284] 3 containers: [c83026fba0c7 a3ae303714a2 53138481ecbd]
I1025 21:59:43.921435 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 21:59:43.957485 112102 logs.go:284] 0 containers: []
W1025 21:59:43.957512 112102 logs.go:286] No container was found matching "kindnet"
I1025 21:59:43.957533 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 21:59:43.957552 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 21:59:43.999404 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 21:59:43.999440 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 21:59:44.053794 112102 logs.go:123] Gathering logs for kube-apiserver [615f2a0c1ed5] ...
I1025 21:59:44.053828 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615f2a0c1ed5"
I1025 21:59:44.118900 112102 logs.go:123] Gathering logs for kube-controller-manager [c83026fba0c7] ...
I1025 21:59:44.118931 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83026fba0c7"
I1025 21:59:44.152873 112102 logs.go:123] Gathering logs for Docker ...
I1025 21:59:44.152924 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 21:59:44.183998 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 21:59:44.184032 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 21:59:44.251422 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 21:59:44.251445 112102 logs.go:123] Gathering logs for kube-controller-manager [a3ae303714a2] ...
I1025 21:59:44.251460 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ae303714a2"
I1025 21:59:44.289549 112102 logs.go:123] Gathering logs for kubelet ...
I1025 21:59:44.289579 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 21:59:44.308813 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:19 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:19.485561 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:44.318555 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:25 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:25.808146 6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 21:59:44.321643 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:27 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:27.816404 6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 21:59:44.326759 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:30 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:30.889849 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:44.328729 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:31 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:31.919778 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:44.340716 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:39 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:39.483253 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 21:59:44.348449 112102 logs.go:123] Gathering logs for dmesg ...
I1025 21:59:44.348471 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 21:59:44.364333 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 21:59:44.364364 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 21:59:44.418505 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 21:59:44.418541 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 21:59:44.525393 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 21:59:44.525439 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 21:59:44.573844 112102 logs.go:123] Gathering logs for container status ...
I1025 21:59:44.573876 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 21:59:44.598877 112102 out.go:309] Setting ErrFile to fd 2...
I1025 21:59:44.598902 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 21:59:44.598957 112102 out.go:239] X Problems detected in kubelet:
W1025 21:59:44.599004 112102 out.go:239] Oct 25 21:59:25 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:25.808146 6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 21:59:44.599022 112102 out.go:239] Oct 25 21:59:27 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:27.816404 6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 21:59:44.599031 112102 out.go:239] Oct 25 21:59:30 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:30.889849 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:44.599041 112102 out.go:239] Oct 25 21:59:31 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:31.919778 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:44.599055 112102 out.go:239] Oct 25 21:59:39 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:39.483253 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 21:59:44.599067 112102 out.go:309] Setting ErrFile to fd 2...
I1025 21:59:44.599077 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:59:54.599724 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 21:59:54.600379 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 21:59:54.600496 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 21:59:54.637503 112102 logs.go:284] 1 containers: [6f3f3376dd08]
I1025 21:59:54.637588 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 21:59:54.677247 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 21:59:54.677331 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 21:59:54.713087 112102 logs.go:284] 0 containers: []
W1025 21:59:54.713114 112102 logs.go:286] No container was found matching "coredns"
I1025 21:59:54.713176 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 21:59:54.753285 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 21:59:54.753358 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 21:59:54.792252 112102 logs.go:284] 0 containers: []
W1025 21:59:54.792279 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 21:59:54.792343 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 21:59:54.848606 112102 logs.go:284] 3 containers: [7792b9b4e0ee c83026fba0c7 53138481ecbd]
I1025 21:59:54.848702 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 21:59:54.890540 112102 logs.go:284] 0 containers: []
W1025 21:59:54.890567 112102 logs.go:286] No container was found matching "kindnet"
I1025 21:59:54.890588 112102 logs.go:123] Gathering logs for kubelet ...
I1025 21:59:54.890605 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 21:59:54.922924 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:39 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:39.483253 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:54.955771 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:52 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:52.148352 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:54.958127 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 21:59:54.960784 112102 logs.go:123] Gathering logs for dmesg ...
I1025 21:59:54.960809 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 21:59:54.972726 112102 logs.go:123] Gathering logs for kube-controller-manager [7792b9b4e0ee] ...
I1025 21:59:54.972753 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7792b9b4e0ee"
I1025 21:59:55.016703 112102 logs.go:123] Gathering logs for kube-controller-manager [c83026fba0c7] ...
I1025 21:59:55.016739 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83026fba0c7"
I1025 21:59:55.063153 112102 logs.go:123] Gathering logs for kube-apiserver [6f3f3376dd08] ...
I1025 21:59:55.063193 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f3f3376dd08"
I1025 21:59:55.132083 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 21:59:55.132120 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 21:59:55.213579 112102 logs.go:123] Gathering logs for container status ...
I1025 21:59:55.213613 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 21:59:55.240458 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 21:59:55.240500 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 21:59:55.332880 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 21:59:55.332919 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 21:59:55.374188 112102 logs.go:123] Gathering logs for Docker ...
I1025 21:59:55.374225 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 21:59:55.409677 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 21:59:55.409725 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 21:59:55.491633 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 21:59:55.491657 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 21:59:55.491670 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 21:59:55.532938 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 21:59:55.532973 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 21:59:55.582374 112102 out.go:309] Setting ErrFile to fd 2...
I1025 21:59:55.582410 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 21:59:55.582481 112102 out.go:239] X Problems detected in kubelet:
W1025 21:59:55.582497 112102 out.go:239] Oct 25 21:59:39 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:39.483253 6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:55.582509 112102 out.go:239] Oct 25 21:59:52 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:52.148352 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 21:59:55.582520 112102 out.go:239] Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 21:59:55.582530 112102 out.go:309] Setting ErrFile to fd 2...
I1025 21:59:55.582539 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:00:05.583121 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:00:05.583754 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:00:05.583869 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:00:05.628720 112102 logs.go:284] 1 containers: [6f3f3376dd08]
I1025 22:00:05.628818 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:00:05.669782 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:00:05.669878 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:00:05.710788 112102 logs.go:284] 0 containers: []
W1025 22:00:05.710815 112102 logs.go:286] No container was found matching "coredns"
I1025 22:00:05.710876 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:00:05.744925 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:00:05.745017 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:00:05.780243 112102 logs.go:284] 0 containers: []
W1025 22:00:05.780275 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:00:05.780337 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:00:05.816325 112102 logs.go:284] 2 containers: [7792b9b4e0ee 53138481ecbd]
I1025 22:00:05.816428 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:00:05.850408 112102 logs.go:284] 0 containers: []
W1025 22:00:05.850431 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:00:05.850445 112102 logs.go:123] Gathering logs for container status ...
I1025 22:00:05.850463 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:00:05.873965 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:00:05.874004 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:00:05.909283 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:52 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:52.148352 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:05.911140 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:05.919483 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:58 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:58.193072 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:05.930738 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
I1025 22:00:05.931642 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:00:05.931661 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:00:05.979912 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:00:05.979952 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:00:06.018284 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:00:06.018323 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:00:06.120746 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:00:06.120793 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:00:06.168084 112102 logs.go:123] Gathering logs for kube-controller-manager [7792b9b4e0ee] ...
I1025 22:00:06.168120 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7792b9b4e0ee"
I1025 22:00:06.205023 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:00:06.205059 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:00:06.268313 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:00:06.268349 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:00:06.302210 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:00:06.302242 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:00:06.313419 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:00:06.313453 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:00:06.388866 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:00:06.388912 112102 logs.go:123] Gathering logs for kube-apiserver [6f3f3376dd08] ...
I1025 22:00:06.388930 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f3f3376dd08"
I1025 22:00:06.461096 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:00:06.461126 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:00:06.461189 112102 out.go:239] X Problems detected in kubelet:
W1025 22:00:06.461204 112102 out.go:239] Oct 25 21:59:52 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:52.148352 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:06.461213 112102 out.go:239] Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:06.461222 112102 out.go:239] Oct 25 21:59:58 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:58.193072 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:06.461228 112102 out.go:239] Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
I1025 22:00:06.461238 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:00:06.461246 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:00:16.462648 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:00:16.463418 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:00:16.463519 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:00:16.505496 112102 logs.go:284] 1 containers: [0512b49c1a2e]
I1025 22:00:16.505584 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:00:16.537874 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:00:16.537979 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:00:16.569920 112102 logs.go:284] 0 containers: []
W1025 22:00:16.569947 112102 logs.go:286] No container was found matching "coredns"
I1025 22:00:16.570030 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:00:16.601152 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:00:16.601239 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:00:16.637702 112102 logs.go:284] 0 containers: []
W1025 22:00:16.637729 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:00:16.637792 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:00:16.673917 112102 logs.go:284] 2 containers: [7792b9b4e0ee 53138481ecbd]
I1025 22:00:16.674009 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:00:16.709847 112102 logs.go:284] 0 containers: []
W1025 22:00:16.709877 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:00:16.709892 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:00:16.709914 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:00:16.753177 112102 logs.go:123] Gathering logs for kube-controller-manager [7792b9b4e0ee] ...
I1025 22:00:16.753213 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7792b9b4e0ee"
I1025 22:00:16.793850 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:00:16.793895 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:00:16.842129 112102 logs.go:123] Gathering logs for container status ...
I1025 22:00:16.842162 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:00:16.860456 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:00:16.860487 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:00:16.880660 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:16.896481 112102 logs.go:138] Found kubelet problem: Oct 25 21:59:58 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:58.193072 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:16.909020 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:00:16.918759 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:11 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:11.353514 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:16.920836 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:12 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:12.560217 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
I1025 22:00:16.927678 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:00:16.927700 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:00:16.939653 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:00:16.939685 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:00:17.008883 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:00:17.008908 112102 logs.go:123] Gathering logs for kube-apiserver [0512b49c1a2e] ...
I1025 22:00:17.008928 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0512b49c1a2e"
I1025 22:00:17.088124 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:00:17.088157 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:00:17.126843 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:00:17.126887 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:00:17.236070 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:00:17.236109 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:00:17.281446 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:00:17.281485 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:00:17.319438 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:00:17.319471 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:00:17.319527 112102 out.go:239] X Problems detected in kubelet:
W1025 22:00:17.319535 112102 out.go:239] Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:17.319552 112102 out.go:239] Oct 25 21:59:58 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:58.193072 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:17.319558 112102 out.go:239] Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:00:17.319569 112102 out.go:239] Oct 25 22:00:11 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:11.353514 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:17.319580 112102 out.go:239] Oct 25 22:00:12 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:12.560217 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
I1025 22:00:17.319597 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:00:17.319605 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:00:27.320410 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:00:27.321010 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:00:27.321092 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:00:27.356154 112102 logs.go:284] 1 containers: [0512b49c1a2e]
I1025 22:00:27.356218 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:00:27.393164 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:00:27.393254 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:00:27.426940 112102 logs.go:284] 0 containers: []
W1025 22:00:27.426962 112102 logs.go:286] No container was found matching "coredns"
I1025 22:00:27.427010 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:00:27.461064 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:00:27.461150 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:00:27.499676 112102 logs.go:284] 0 containers: []
W1025 22:00:27.499708 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:00:27.499771 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:00:27.531782 112102 logs.go:284] 3 containers: [16645aa4516e 7792b9b4e0ee 53138481ecbd]
I1025 22:00:27.531865 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:00:27.561803 112102 logs.go:284] 0 containers: []
W1025 22:00:27.561832 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:00:27.561851 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:00:27.561869 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:00:27.636933 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:00:27.636957 112102 logs.go:123] Gathering logs for kube-apiserver [0512b49c1a2e] ...
I1025 22:00:27.636968 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0512b49c1a2e"
I1025 22:00:27.705703 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:00:27.705740 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:00:27.750043 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:00:27.750071 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:00:27.797353 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:00:27.797398 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:00:27.806863 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:00:27.806894 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:00:27.855830 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:00:27.855863 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:00:27.886260 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:00:27.901849 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:11 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:11.353514 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:27.903887 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:12 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:12.560217 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:00:27.913116 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:18 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:18.187708 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:00:27.926982 112102 logs.go:123] Gathering logs for kube-controller-manager [16645aa4516e] ...
I1025 22:00:27.927014 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16645aa4516e"
I1025 22:00:27.966796 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:00:27.966830 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:00:27.998909 112102 logs.go:123] Gathering logs for container status ...
I1025 22:00:27.998948 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:00:29.018614 112102 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.01964037s)
I1025 22:00:29.019223 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:00:29.019238 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:00:29.117653 112102 logs.go:123] Gathering logs for kube-controller-manager [7792b9b4e0ee] ...
I1025 22:00:29.117691 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7792b9b4e0ee"
W1025 22:00:29.199848 112102 logs.go:130] failed kube-controller-manager [7792b9b4e0ee]: command: /bin/bash -c "docker logs --tail 400 7792b9b4e0ee" /bin/bash -c "docker logs --tail 400 7792b9b4e0ee": Process exited with status 1
stdout:
stderr:
Error: No such container: 7792b9b4e0ee
output:
** stderr **
Error: No such container: 7792b9b4e0ee
** /stderr **
I1025 22:00:29.199869 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:00:29.199880 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:00:29.285044 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:00:29.285078 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:00:29.285137 112102 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W1025 22:00:29.285147 112102 out.go:239] Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:00:29.285155 112102 out.go:239] Oct 25 22:00:11 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:11.353514 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:00:11 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:11.353514 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:29.285161 112102 out.go:239] Oct 25 22:00:12 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:12.560217 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
Oct 25 22:00:12 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:12.560217 7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:00:29.285166 112102 out.go:239] Oct 25 22:00:18 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:18.187708 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:00:18 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:18.187708 7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:00:29.285172 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:00:29.285178 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:00:39.286616 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:00:39.287364 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:00:39.287477 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:00:39.321781 112102 logs.go:284] 1 containers: [4020488488c9]
I1025 22:00:39.321867 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:00:39.352191 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:00:39.352296 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:00:39.382433 112102 logs.go:284] 0 containers: []
W1025 22:00:39.382465 112102 logs.go:286] No container was found matching "coredns"
I1025 22:00:39.382525 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:00:39.411537 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:00:39.411626 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:00:39.439791 112102 logs.go:284] 0 containers: []
W1025 22:00:39.439815 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:00:39.439879 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:00:39.473548 112102 logs.go:284] 3 containers: [e1d2be52be40 16645aa4516e 53138481ecbd]
I1025 22:00:39.473640 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:00:39.513005 112102 logs.go:284] 0 containers: []
W1025 22:00:39.513037 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:00:39.513054 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:00:39.513068 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:00:39.551327 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:00:39.551357 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:00:39.594523 112102 logs.go:123] Gathering logs for kube-controller-manager [16645aa4516e] ...
I1025 22:00:39.594562 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16645aa4516e"
I1025 22:00:39.631307 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:00:39.631338 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:00:39.664002 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:00:39.664033 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:00:39.705088 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:30 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:30.571064 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:39.706759 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:31 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:31.572055 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:39.708596 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:32 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:32.570796 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:00:39.719904 112102 logs.go:123] Gathering logs for kube-apiserver [4020488488c9] ...
I1025 22:00:39.719929 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4020488488c9"
I1025 22:00:39.775543 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:00:39.775575 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:00:39.814971 112102 logs.go:123] Gathering logs for container status ...
I1025 22:00:39.815000 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:00:39.836093 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:00:39.836121 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:00:39.933410 112102 logs.go:123] Gathering logs for kube-controller-manager [e1d2be52be40] ...
I1025 22:00:39.933450 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1d2be52be40"
I1025 22:00:39.971070 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:00:39.971099 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:00:40.012071 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:00:40.012108 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:00:40.021130 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:00:40.021154 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:00:40.084858 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:00:40.084893 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:00:40.084907 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:00:40.084959 112102 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W1025 22:00:40.084971 112102 out.go:239] Oct 25 22:00:30 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:30.571064 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:00:30 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:30.571064 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:40.084978 112102 out.go:239] Oct 25 22:00:31 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:31.572055 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:00:31 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:31.572055 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:40.084984 112102 out.go:239] Oct 25 22:00:32 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:32.570796 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:00:32 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:32.570796 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:00:40.084989 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:00:40.084994 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:00:50.086071 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:00:50.086710 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:00:50.086792 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:00:50.119920 112102 logs.go:284] 1 containers: [064ea6f86a9c]
I1025 22:00:50.120014 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:00:50.150402 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:00:50.150490 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:00:50.181431 112102 logs.go:284] 0 containers: []
W1025 22:00:50.181468 112102 logs.go:286] No container was found matching "coredns"
I1025 22:00:50.181529 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:00:50.212237 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:00:50.212354 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:00:50.242512 112102 logs.go:284] 0 containers: []
W1025 22:00:50.242540 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:00:50.242605 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:00:50.275176 112102 logs.go:284] 4 containers: [1dd89316adc1 e1d2be52be40 16645aa4516e 53138481ecbd]
I1025 22:00:50.275275 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:00:50.306176 112102 logs.go:284] 0 containers: []
W1025 22:00:50.306206 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:00:50.306221 112102 logs.go:123] Gathering logs for kube-controller-manager [16645aa4516e] ...
I1025 22:00:50.306234 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16645aa4516e"
I1025 22:00:50.341164 112102 logs.go:123] Gathering logs for container status ...
I1025 22:00:50.341202 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:00:50.359484 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:00:50.359509 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:00:50.369107 112102 logs.go:123] Gathering logs for kube-apiserver [064ea6f86a9c] ...
I1025 22:00:50.369133 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064ea6f86a9c"
I1025 22:00:50.431383 112102 logs.go:123] Gathering logs for kube-controller-manager [e1d2be52be40] ...
I1025 22:00:50.431434 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1d2be52be40"
I1025 22:00:50.466028 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:00:50.466059 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:00:50.546745 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:00:50.546767 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:00:50.546780 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:00:50.593269 112102 logs.go:123] Gathering logs for kube-controller-manager [1dd89316adc1] ...
I1025 22:00:50.593302 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dd89316adc1"
I1025 22:00:50.631964 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:00:50.631992 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:00:50.743868 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:00:50.743909 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:00:50.780512 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:00:50.780544 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:00:50.804502 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:30 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:30.571064 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:50.806204 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:31 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:31.572055 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:50.808112 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:32 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:32.570796 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:50.831673 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:47 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:47.751362 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:50.833912 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:48 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:48.773179 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:50.837162 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:50 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:50.655030 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:00:50.837482 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:00:50.837504 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:00:50.875462 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:00:50.875491 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:00:50.916189 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:00:50.916241 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:00:50.963419 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:00:50.963445 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:00:50.963497 112102 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W1025 22:00:50.963512 112102 out.go:239] Oct 25 22:00:31 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:31.572055 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:00:31 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:31.572055 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:50.963530 112102 out.go:239] Oct 25 22:00:32 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:32.570796 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:00:32 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:32.570796 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:50.963541 112102 out.go:239] Oct 25 22:00:47 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:47.751362 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:00:47 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:47.751362 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:50.963553 112102 out.go:239] Oct 25 22:00:48 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:48.773179 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:00:48 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:48.773179 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:00:50.963571 112102 out.go:239] Oct 25 22:00:50 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:50.655030 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:00:50 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:50.655030 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:00:50.963583 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:00:50.963596 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:01:00.965228 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:01:00.965924 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:01:00.966013 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:01:00.999637 112102 logs.go:284] 1 containers: [064ea6f86a9c]
I1025 22:01:00.999729 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:01:01.032956 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:01:01.033045 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:01:01.063708 112102 logs.go:284] 0 containers: []
W1025 22:01:01.063735 112102 logs.go:286] No container was found matching "coredns"
I1025 22:01:01.063793 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:01:01.096877 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:01:01.096958 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:01:01.127378 112102 logs.go:284] 0 containers: []
W1025 22:01:01.127407 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:01:01.127469 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:01:01.159108 112102 logs.go:284] 3 containers: [1dd89316adc1 16645aa4516e 53138481ecbd]
I1025 22:01:01.159202 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:01:01.190113 112102 logs.go:284] 0 containers: []
W1025 22:01:01.190137 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:01:01.190157 112102 logs.go:123] Gathering logs for container status ...
I1025 22:01:01.190169 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:01:01.207588 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:01:01.207619 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:01:01.306531 112102 logs.go:123] Gathering logs for kube-controller-manager [16645aa4516e] ...
I1025 22:01:01.306571 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16645aa4516e"
I1025 22:01:01.343259 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:01:01.343292 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:01:01.375391 112102 logs.go:123] Gathering logs for kube-controller-manager [1dd89316adc1] ...
I1025 22:01:01.375423 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dd89316adc1"
I1025 22:01:01.410399 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:01:01.410427 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:01:01.450148 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:01:01.450179 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:01:01.481256 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:47 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:47.751362 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:01.483506 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:48 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:48.773179 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:01.486382 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:50 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:50.655030 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:01.489918 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:52 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:52.826034 9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:01.499230 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:58 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:58.651724 9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
I1025 22:01:01.503823 112102 logs.go:123] Gathering logs for kube-apiserver [064ea6f86a9c] ...
I1025 22:01:01.503841 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064ea6f86a9c"
I1025 22:01:01.564756 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:01:01.564790 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:01:01.605724 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:01:01.605756 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:01:01.675136 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:01:01.675157 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:01:01.675168 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:01:01.713573 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:01:01.713612 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:01:01.755029 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:01:01.755064 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:01:01.765173 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:01.765195 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:01:01.765248 112102 out.go:239] X Problems detected in kubelet:
W1025 22:01:01.765260 112102 out.go:239] Oct 25 22:00:47 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:47.751362 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:01.765268 112102 out.go:239] Oct 25 22:00:48 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:48.773179 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:01.765277 112102 out.go:239] Oct 25 22:00:50 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:50.655030 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:01.765283 112102 out.go:239] Oct 25 22:00:52 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:52.826034 9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:01.765293 112102 out.go:239] Oct 25 22:00:58 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:58.651724 9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
I1025 22:01:01.765302 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:01.765309 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:01:11.766605 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:01:11.767261 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:01:11.767355 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:01:11.806406 112102 logs.go:284] 1 containers: [cdbdd0260197]
I1025 22:01:11.806508 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:01:11.839324 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:01:11.839395 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:01:11.869758 112102 logs.go:284] 0 containers: []
W1025 22:01:11.869780 112102 logs.go:286] No container was found matching "coredns"
I1025 22:01:11.869834 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:01:11.905099 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:01:11.905198 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:01:11.937409 112102 logs.go:284] 0 containers: []
W1025 22:01:11.937432 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:01:11.937490 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:01:11.975032 112102 logs.go:284] 3 containers: [8573e3b0daef 1dd89316adc1 53138481ecbd]
I1025 22:01:11.975131 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:01:12.016243 112102 logs.go:284] 0 containers: []
W1025 22:01:12.016266 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:01:12.016282 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:01:12.016295 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:01:12.040154 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:52 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:52.826034 9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:12.050137 112102 logs.go:138] Found kubelet problem: Oct 25 22:00:58 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:58.651724 9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:12.058461 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:03 stopped-upgrade-634233 kubelet[9704]: E1025 22:01:03.329959 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:12.078581 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:01:12.082448 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:01:12.082479 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:01:12.092465 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:01:12.092502 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:01:12.130925 112102 logs.go:123] Gathering logs for kube-controller-manager [1dd89316adc1] ...
I1025 22:01:12.130956 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dd89316adc1"
I1025 22:01:12.173734 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:01:12.173772 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:01:12.227355 112102 logs.go:123] Gathering logs for container status ...
I1025 22:01:12.227397 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:01:12.253553 112102 logs.go:123] Gathering logs for kube-apiserver [cdbdd0260197] ...
I1025 22:01:12.253591 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbdd0260197"
I1025 22:01:12.314333 112102 logs.go:123] Gathering logs for kube-controller-manager [8573e3b0daef] ...
I1025 22:01:12.314370 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8573e3b0daef"
I1025 22:01:12.351226 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:01:12.351260 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:01:12.401942 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:01:12.401999 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:01:12.504610 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:01:12.504651 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:01:12.545952 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:01:12.545984 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:01:12.612533 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:01:12.612562 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:01:12.612576 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:01:12.654706 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:12.654741 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:01:12.654812 112102 out.go:239] X Problems detected in kubelet:
W1025 22:01:12.654838 112102 out.go:239] Oct 25 22:00:52 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:52.826034 9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:12.654852 112102 out.go:239] Oct 25 22:00:58 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:58.651724 9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:12.654864 112102 out.go:239] Oct 25 22:01:03 stopped-upgrade-634233 kubelet[9704]: E1025 22:01:03.329959 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:12.654876 112102 out.go:239] Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:01:12.654890 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:12.654898 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:01:22.655428 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:01:22.656090 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:01:22.656172 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:01:22.696713 112102 logs.go:284] 1 containers: [cdbdd0260197]
I1025 22:01:22.696810 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:01:22.738056 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:01:22.738139 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:01:22.769052 112102 logs.go:284] 0 containers: []
W1025 22:01:22.769075 112102 logs.go:286] No container was found matching "coredns"
I1025 22:01:22.769130 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:01:22.803051 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:01:22.803125 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:01:22.832579 112102 logs.go:284] 0 containers: []
W1025 22:01:22.832602 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:01:22.832651 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:01:22.875795 112102 logs.go:284] 2 containers: [8573e3b0daef 53138481ecbd]
I1025 22:01:22.875897 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:01:22.923740 112102 logs.go:284] 0 containers: []
W1025 22:01:22.923769 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:01:22.923785 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:01:22.923830 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:01:22.942540 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:03 stopped-upgrade-634233 kubelet[9704]: E1025 22:01:03.329959 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:22.961215 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:22.971806 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:16 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:16.588843 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:22.976940 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:22.981905 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
I1025 22:01:22.981938 112102 logs.go:123] Gathering logs for kube-apiserver [cdbdd0260197] ...
I1025 22:01:22.981966 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbdd0260197"
I1025 22:01:23.039536 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:01:23.039572 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:01:23.076076 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:01:23.076109 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:01:23.168032 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:01:23.168080 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:01:23.218392 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:01:23.218432 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:01:23.229071 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:01:23.229099 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:01:23.307057 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:01:23.307078 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:01:23.307089 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:01:23.344132 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:01:23.344164 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:01:23.390409 112102 logs.go:123] Gathering logs for kube-controller-manager [8573e3b0daef] ...
I1025 22:01:23.390443 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8573e3b0daef"
I1025 22:01:23.425784 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:01:23.425815 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:01:23.456977 112102 logs.go:123] Gathering logs for container status ...
I1025 22:01:23.457016 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:01:23.478633 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:23.478666 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:01:23.478728 112102 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W1025 22:01:23.478750 112102 out.go:239] Oct 25 22:01:03 stopped-upgrade-634233 kubelet[9704]: E1025 22:01:03.329959 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:03 stopped-upgrade-634233 kubelet[9704]: E1025 22:01:03.329959 9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:23.478764 112102 out.go:239] Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:23.478784 112102 out.go:239] Oct 25 22:01:16 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:16.588843 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:16 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:16.588843 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:23.478799 112102 out.go:239] Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:23.478809 112102 out.go:239] Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
I1025 22:01:23.478822 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:23.478847 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:01:33.479544 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:01:33.480143 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:01:33.480266 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:01:33.515976 112102 logs.go:284] 1 containers: [044bfb6e9ec8]
I1025 22:01:33.516063 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:01:33.553478 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:01:33.553546 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:01:33.588972 112102 logs.go:284] 0 containers: []
W1025 22:01:33.589000 112102 logs.go:286] No container was found matching "coredns"
I1025 22:01:33.589061 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:01:33.620791 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:01:33.620885 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:01:33.651720 112102 logs.go:284] 0 containers: []
W1025 22:01:33.651746 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:01:33.651806 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:01:33.681915 112102 logs.go:284] 2 containers: [8573e3b0daef 53138481ecbd]
I1025 22:01:33.682004 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:01:33.711270 112102 logs.go:284] 0 containers: []
W1025 22:01:33.711294 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:01:33.711315 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:01:33.711330 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:01:33.756532 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:01:33.756572 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:01:33.794191 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:01:33.794223 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:01:33.883530 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:01:33.883565 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:01:33.924964 112102 logs.go:123] Gathering logs for container status ...
I1025 22:01:33.924995 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:01:33.948257 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:01:33.948297 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:01:33.970345 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:33.981122 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:16 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:16.588843 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:33.986481 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:33.991378 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:34.002630 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:29 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:29.795238 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:01:34.009073 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:01:34.009095 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:01:34.019581 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:01:34.019606 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:01:34.089518 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:01:34.089589 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:01:34.089621 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:01:34.126587 112102 logs.go:123] Gathering logs for kube-apiserver [044bfb6e9ec8] ...
I1025 22:01:34.126619 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 044bfb6e9ec8"
I1025 22:01:34.191804 112102 logs.go:123] Gathering logs for kube-controller-manager [8573e3b0daef] ...
I1025 22:01:34.191837 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8573e3b0daef"
I1025 22:01:34.230849 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:01:34.230879 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:01:34.275305 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:34.275334 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:01:34.275388 112102 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W1025 22:01:34.275399 112102 out.go:239] Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:34.275406 112102 out.go:239] Oct 25 22:01:16 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:16.588843 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:16 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:16.588843 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:34.275420 112102 out.go:239] Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:34.275434 112102 out.go:239] Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:34.275446 112102 out.go:239] Oct 25 22:01:29 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:29.795238 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:29 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:29.795238 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:01:34.275456 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:34.275463 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:01:44.277061 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:01:44.277743 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:01:44.277849 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:01:44.318353 112102 logs.go:284] 1 containers: [044bfb6e9ec8]
I1025 22:01:44.318452 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:01:44.357199 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:01:44.357290 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:01:44.397263 112102 logs.go:284] 0 containers: []
W1025 22:01:44.397291 112102 logs.go:286] No container was found matching "coredns"
I1025 22:01:44.397355 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:01:44.445538 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:01:44.445626 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:01:44.482527 112102 logs.go:284] 0 containers: []
W1025 22:01:44.482560 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:01:44.482619 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:01:44.519526 112102 logs.go:284] 3 containers: [56aa01cc7db9 8573e3b0daef 53138481ecbd]
I1025 22:01:44.519629 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:01:44.558628 112102 logs.go:284] 0 containers: []
W1025 22:01:44.558660 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:01:44.558684 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:01:44.558703 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:01:44.580361 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:44.585406 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:44.597052 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:29 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:29.795238 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:44.608057 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:36 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:36.595537 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:01:44.620251 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:01:44.620282 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:01:44.703606 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:01:44.703633 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:01:44.703648 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:01:44.750857 112102 logs.go:123] Gathering logs for kube-controller-manager [56aa01cc7db9] ...
I1025 22:01:44.750903 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aa01cc7db9"
I1025 22:01:44.791420 112102 logs.go:123] Gathering logs for container status ...
I1025 22:01:44.791462 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:01:44.815681 112102 logs.go:123] Gathering logs for kube-apiserver [044bfb6e9ec8] ...
I1025 22:01:44.815718 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 044bfb6e9ec8"
I1025 22:01:44.875475 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:01:44.875521 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:01:44.990427 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:01:44.990563 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:01:45.037024 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:01:45.037060 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:01:45.079783 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:01:45.079819 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:01:45.125339 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:01:45.125385 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:01:45.136253 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:01:45.136288 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:01:45.183501 112102 logs.go:123] Gathering logs for kube-controller-manager [8573e3b0daef] ...
I1025 22:01:45.183534 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8573e3b0daef"
I1025 22:01:45.225415 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:45.225452 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:01:45.225524 112102 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W1025 22:01:45.225540 112102 out.go:239] Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:45.225555 112102 out.go:239] Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018 11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:01:45.225568 112102 out.go:239] Oct 25 22:01:29 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:29.795238 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:29 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:29.795238 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:45.225580 112102 out.go:239] Oct 25 22:01:36 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:36.595537 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:36 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:36.595537 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:01:45.225592 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:45.225604 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:01:55.226039 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:01:55.226699 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:01:55.226782 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:01:55.265097 112102 logs.go:284] 1 containers: [bb666bf92cd4]
I1025 22:01:55.265207 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:01:55.305624 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:01:55.305766 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:01:55.345165 112102 logs.go:284] 0 containers: []
W1025 22:01:55.345194 112102 logs.go:286] No container was found matching "coredns"
I1025 22:01:55.345246 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:01:55.376667 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:01:55.376753 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:01:55.413204 112102 logs.go:284] 0 containers: []
W1025 22:01:55.413231 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:01:55.413290 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:01:55.448549 112102 logs.go:284] 3 containers: [9b087fb968e7 56aa01cc7db9 53138481ecbd]
I1025 22:01:55.448652 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:01:55.483400 112102 logs.go:284] 0 containers: []
W1025 22:01:55.483432 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:01:55.483446 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:01:55.483458 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:01:55.507983 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:36 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:36.595537 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:55.539182 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.000691 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:55.540786 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.948569 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:55.549420 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:01:55.549780 112102 logs.go:123] Gathering logs for kube-apiserver [bb666bf92cd4] ...
I1025 22:01:55.549801 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb666bf92cd4"
I1025 22:01:55.626600 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:01:55.626636 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:01:55.679910 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:01:55.679942 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:01:55.716288 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:01:55.716321 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:01:55.728024 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:01:55.728053 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:01:55.798910 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:01:55.798934 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:01:55.798948 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:01:55.849112 112102 logs.go:123] Gathering logs for kube-controller-manager [56aa01cc7db9] ...
I1025 22:01:55.849152 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aa01cc7db9"
I1025 22:01:55.897138 112102 logs.go:123] Gathering logs for container status ...
I1025 22:01:55.897166 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:01:55.921607 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:01:55.921634 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:01:55.973348 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:01:55.973381 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:01:56.019485 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:01:56.019521 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:01:56.136053 112102 logs.go:123] Gathering logs for kube-controller-manager [9b087fb968e7] ...
I1025 22:01:56.136093 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b087fb968e7"
I1025 22:01:56.179509 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:56.179537 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:01:56.179600 112102 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W1025 22:01:56.179615 112102 out.go:239] Oct 25 22:01:36 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:36.595537 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:36 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:36.595537 11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:56.179627 112102 out.go:239] Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.000691 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.000691 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:56.179640 112102 out.go:239] Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.948569 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.948569 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:01:56.179653 112102 out.go:239] Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:01:56.179663 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:01:56.179668 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:02:06.181426 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:02:06.182140 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:02:06.182236 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:02:06.219515 112102 logs.go:284] 1 containers: [bb666bf92cd4]
I1025 22:02:06.219585 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:02:06.254804 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:02:06.254900 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:02:06.300067 112102 logs.go:284] 0 containers: []
W1025 22:02:06.300098 112102 logs.go:286] No container was found matching "coredns"
I1025 22:02:06.300163 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:02:06.334063 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:02:06.334142 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:02:06.366596 112102 logs.go:284] 0 containers: []
W1025 22:02:06.366620 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:02:06.366677 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:02:06.405517 112102 logs.go:284] 2 containers: [9b087fb968e7 53138481ecbd]
I1025 22:02:06.405603 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:02:06.440097 112102 logs.go:284] 0 containers: []
W1025 22:02:06.440121 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:02:06.440135 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:02:06.440148 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:02:06.469811 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.000691 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:06.471374 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.948569 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:06.479721 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:06.487512 112102 logs.go:138] Found kubelet problem: Oct 25 22:02:00 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:00.010327 13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:02:06.490473 112102 logs.go:138] Found kubelet problem: Oct 25 22:02:02 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:02.089671 13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
I1025 22:02:06.497508 112102 logs.go:123] Gathering logs for kube-apiserver [bb666bf92cd4] ...
I1025 22:02:06.497533 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb666bf92cd4"
I1025 22:02:06.556489 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:02:06.556522 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:02:06.612058 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:02:06.612088 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:02:06.664901 112102 logs.go:123] Gathering logs for kube-controller-manager [9b087fb968e7] ...
I1025 22:02:06.664936 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b087fb968e7"
I1025 22:02:06.712162 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:02:06.712198 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:02:06.760167 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:02:06.760205 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:02:06.772085 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:02:06.772114 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:02:06.842049 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:02:06.842077 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:02:06.842092 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:02:06.881853 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:02:06.881884 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:02:06.925916 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:02:06.925956 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:02:07.036722 112102 logs.go:123] Gathering logs for container status ...
I1025 22:02:07.036761 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:02:07.065295 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:02:07.065322 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:02:07.065378 112102 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W1025 22:02:07.065396 112102 out.go:239] Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.000691 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.000691 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:07.065410 112102 out.go:239] Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.948569 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.948569 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:07.065421 112102 out.go:239] Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:07.065433 112102 out.go:239] Oct 25 22:02:00 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:00.010327 13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
Oct 25 22:02:00 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:00.010327 13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:02:07.065443 112102 out.go:239] Oct 25 22:02:02 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:02.089671 13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
Oct 25 22:02:02 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:02.089671 13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
I1025 22:02:07.065457 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:02:07.065464 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:02:17.066237 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:02:17.066866 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:02:17.066969 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:02:17.103308 112102 logs.go:284] 1 containers: [d5226f967430]
I1025 22:02:17.103379 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:02:17.143531 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:02:17.143611 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:02:17.176121 112102 logs.go:284] 0 containers: []
W1025 22:02:17.176151 112102 logs.go:286] No container was found matching "coredns"
I1025 22:02:17.176210 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:02:17.208049 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:02:17.208120 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:02:17.241165 112102 logs.go:284] 0 containers: []
W1025 22:02:17.241188 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:02:17.241245 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:02:17.273320 112102 logs.go:284] 3 containers: [3aa1487697c9 9b087fb968e7 53138481ecbd]
I1025 22:02:17.273412 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:02:17.304394 112102 logs.go:284] 0 containers: []
W1025 22:02:17.304424 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:02:17.304438 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:02:17.304459 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:02:17.346026 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:02:17.346056 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:02:17.395824 112102 logs.go:123] Gathering logs for kube-controller-manager [3aa1487697c9] ...
I1025 22:02:17.395854 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1487697c9"
I1025 22:02:17.433162 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:02:17.433189 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:02:17.487559 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:02:17.487588 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:02:17.496717 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:02:17.496746 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:02:17.572572 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:02:17.572601 112102 logs.go:123] Gathering logs for kube-apiserver [d5226f967430] ...
I1025 22:02:17.572619 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5226f967430"
I1025 22:02:17.635850 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:02:17.635880 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:02:17.686147 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:02:17.686191 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:02:17.711837 112102 logs.go:138] Found kubelet problem: Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:17.720308 112102 logs.go:138] Found kubelet problem: Oct 25 22:02:00 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:00.010327 13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:02:17.723325 112102 logs.go:138] Found kubelet problem: Oct 25 22:02:02 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:02.089671 13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:02:17.740341 112102 logs.go:138] Found kubelet problem: Oct 25 22:02:11 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:11.134483 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:17.748070 112102 logs.go:138] Found kubelet problem: Oct 25 22:02:15 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:15.318641 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:02:17.751751 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:02:17.751778 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:02:17.791355 112102 logs.go:123] Gathering logs for kube-controller-manager [9b087fb968e7] ...
I1025 22:02:17.791387 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b087fb968e7"
I1025 22:02:17.832757 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:02:17.832783 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:02:17.928565 112102 logs.go:123] Gathering logs for container status ...
I1025 22:02:17.928602 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:02:17.954915 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:02:17.954951 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:02:17.955055 112102 out.go:239] X Problems detected in kubelet:
X Problems detected in kubelet:
W1025 22:02:17.955074 112102 out.go:239] Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:17.955089 112102 out.go:239] Oct 25 22:02:00 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:00.010327 13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:02:17.955100 112102 out.go:239] Oct 25 22:02:02 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:02.089671 13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
W1025 22:02:17.955109 112102 out.go:239] Oct 25 22:02:11 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:11.134483 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:17.955119 112102 out.go:239] Oct 25 22:02:15 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:15.318641 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:02:17.955128 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:02:17.955143 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:02:27.956009 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:02:27.956817 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:02:27.956906 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:02:28.001663 112102 logs.go:284] 2 containers: [ef3e9f6dc565 d5226f967430]
I1025 22:02:28.001759 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:02:28.046954 112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
I1025 22:02:28.047051 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:02:28.086101 112102 logs.go:284] 0 containers: []
W1025 22:02:28.086136 112102 logs.go:286] No container was found matching "coredns"
I1025 22:02:28.086204 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:02:28.127303 112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
I1025 22:02:28.127387 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:02:28.160380 112102 logs.go:284] 0 containers: []
W1025 22:02:28.160405 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:02:28.160474 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:02:28.192810 112102 logs.go:284] 3 containers: [040aa54dc9a4 3aa1487697c9 53138481ecbd]
I1025 22:02:28.192885 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:02:28.228839 112102 logs.go:284] 0 containers: []
W1025 22:02:28.228875 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:02:28.228900 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:02:28.228928 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:02:28.301613 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:02:28.301641 112102 logs.go:123] Gathering logs for kube-controller-manager [040aa54dc9a4] ...
I1025 22:02:28.301657 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040aa54dc9a4"
I1025 22:02:29.486716 112102 ssh_runner.go:235] Completed: /bin/bash -c "docker logs --tail 400 040aa54dc9a4": (1.185027554s)
I1025 22:02:29.486766 112102 logs.go:123] Gathering logs for kube-apiserver [d5226f967430] ...
I1025 22:02:29.486781 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5226f967430"
I1025 22:02:29.576744 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:02:29.576793 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:02:29.615552 112102 logs.go:138] Found kubelet problem: Oct 25 22:02:11 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:11.134483 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:29.627911 112102 logs.go:138] Found kubelet problem: Oct 25 22:02:15 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:15.318641 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:02:29.673769 112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
I1025 22:02:29.673809 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
I1025 22:02:29.819358 112102 logs.go:123] Gathering logs for kube-controller-manager [3aa1487697c9] ...
I1025 22:02:29.819402 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1487697c9"
I1025 22:02:29.877727 112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
I1025 22:02:29.877768 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
I1025 22:02:29.946841 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:02:29.946881 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:02:29.991082 112102 logs.go:123] Gathering logs for container status ...
I1025 22:02:29.991118 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:02:30.020196 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:02:30.020253 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:02:30.034535 112102 logs.go:123] Gathering logs for kube-apiserver [ef3e9f6dc565] ...
I1025 22:02:30.034574 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef3e9f6dc565"
I1025 22:02:30.132820 112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
I1025 22:02:30.132864 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
I1025 22:02:30.199388 112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
I1025 22:02:30.199427 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
I1025 22:02:30.264634 112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
I1025 22:02:30.264673 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
I1025 22:02:30.333043 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:02:30.333080 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
W1025 22:02:30.333157 112102 out.go:239] X Problems detected in kubelet:
W1025 22:02:30.333171 112102 out.go:239] Oct 25 22:02:11 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:11.134483 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:02:30.333182 112102 out.go:239] Oct 25 22:02:15 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:15.318641 13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:02:30.333194 112102 out.go:309] Setting ErrFile to fd 2...
I1025 22:02:30.333202 112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 22:02:40.334583 112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
I1025 22:02:40.335196 112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
I1025 22:02:40.335280 112102 kubeadm.go:640] restartCluster took 4m48.89592098s
W1025 22:02:40.335349 112102 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
I1025 22:02:40.335381 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I1025 22:02:42.595221 112102 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2.259817104s)
I1025 22:02:42.595290 112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1025 22:02:42.604693 112102 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1025 22:02:42.610588 112102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1025 22:02:42.617496 112102 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1025 22:02:42.617544 112102 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I1025 22:02:42.675319 112102 kubeadm.go:322] [init] Using Kubernetes version: v1.17.0
I1025 22:02:42.675616 112102 kubeadm.go:322] [preflight] Running pre-flight checks
I1025 22:02:42.938560 112102 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I1025 22:02:42.938740 112102 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1025 22:02:42.938878 112102 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1025 22:02:43.279301 112102 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1025 22:02:43.279475 112102 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1025 22:02:43.279531 112102 kubeadm.go:322] [kubelet-start] Starting the kubelet
I1025 22:02:43.372141 112102 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1025 22:02:43.374166 112102 out.go:204] - Generating certificates and keys ...
I1025 22:02:43.374288 112102 kubeadm.go:322] [certs] Using existing ca certificate authority
I1025 22:02:43.374378 112102 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I1025 22:02:43.374502 112102 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1025 22:02:43.374639 112102 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I1025 22:02:43.374741 112102 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I1025 22:02:43.374827 112102 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I1025 22:02:43.374919 112102 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I1025 22:02:43.375010 112102 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I1025 22:02:43.375423 112102 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1025 22:02:43.376113 112102 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1025 22:02:43.376290 112102 kubeadm.go:322] [certs] Using the existing "sa" key
I1025 22:02:43.376489 112102 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1025 22:02:43.517496 112102 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I1025 22:02:43.670859 112102 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1025 22:02:43.917905 112102 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1025 22:02:44.164406 112102 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1025 22:02:44.165797 112102 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1025 22:02:44.167643 112102 out.go:204] - Booting up control plane ...
I1025 22:02:44.167786 112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1025 22:02:44.198124 112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1025 22:02:44.203077 112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1025 22:02:44.207416 112102 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1025 22:02:44.227131 112102 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1025 22:03:24.229071 112102 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I1025 22:06:44.230969 112102 kubeadm.go:322]
I1025 22:06:44.231100 112102 kubeadm.go:322] Unfortunately, an error has occurred:
I1025 22:06:44.231263 112102 kubeadm.go:322] timed out waiting for the condition
I1025 22:06:44.231275 112102 kubeadm.go:322]
I1025 22:06:44.231315 112102 kubeadm.go:322] This error is likely caused by:
I1025 22:06:44.231381 112102 kubeadm.go:322] - The kubelet is not running
I1025 22:06:44.231533 112102 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1025 22:06:44.231558 112102 kubeadm.go:322]
I1025 22:06:44.231745 112102 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1025 22:06:44.231816 112102 kubeadm.go:322] - 'systemctl status kubelet'
I1025 22:06:44.231875 112102 kubeadm.go:322] - 'journalctl -xeu kubelet'
I1025 22:06:44.231884 112102 kubeadm.go:322]
I1025 22:06:44.232045 112102 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1025 22:06:44.232186 112102 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
I1025 22:06:44.232328 112102 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I1025 22:06:44.232404 112102 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I1025 22:06:44.232549 112102 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I1025 22:06:44.232595 112102 kubeadm.go:322] - 'docker logs CONTAINERID'
I1025 22:06:44.233586 112102 kubeadm.go:322] W1025 22:02:42.671272 16516 validation.go:28] Cannot validate kube-proxy config - no validator is available
I1025 22:06:44.233768 112102 kubeadm.go:322] W1025 22:02:42.671533 16516 validation.go:28] Cannot validate kubelet config - no validator is available
I1025 22:06:44.234054 112102 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I1025 22:06:44.234202 112102 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1025 22:06:44.234373 112102 kubeadm.go:322] W1025 22:02:44.194971 16516 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1025 22:06:44.234538 112102 kubeadm.go:322] W1025 22:02:44.199953 16516 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1025 22:06:44.234661 112102 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1025 22:06:44.234764 112102 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W1025 22:06:44.234939 112102 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1025 22:02:42.671272 16516 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1025 22:02:42.671533 16516 validation.go:28] Cannot validate kubelet config - no validator is available
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1025 22:02:44.194971 16516 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1025 22:02:44.199953 16516 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I1025 22:06:44.235040 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I1025 22:06:47.388987 112102 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3.15391378s)
I1025 22:06:47.389066 112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1025 22:06:47.404472 112102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1025 22:06:47.423257 112102 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1025 22:06:47.423313 112102 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I1025 22:06:47.495864 112102 kubeadm.go:322] [init] Using Kubernetes version: v1.17.0
I1025 22:06:47.496100 112102 kubeadm.go:322] [preflight] Running pre-flight checks
I1025 22:06:47.845007 112102 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I1025 22:06:47.845151 112102 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1025 22:06:47.845313 112102 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1025 22:06:48.234336 112102 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1025 22:06:48.234462 112102 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1025 22:06:48.234512 112102 kubeadm.go:322] [kubelet-start] Starting the kubelet
I1025 22:06:48.371700 112102 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1025 22:06:48.374879 112102 out.go:204] - Generating certificates and keys ...
I1025 22:06:48.374996 112102 kubeadm.go:322] [certs] Using existing ca certificate authority
I1025 22:06:48.375078 112102 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I1025 22:06:48.375170 112102 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1025 22:06:48.375243 112102 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I1025 22:06:48.375324 112102 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I1025 22:06:48.375394 112102 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I1025 22:06:48.375470 112102 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I1025 22:06:48.375545 112102 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I1025 22:06:48.375635 112102 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1025 22:06:48.375730 112102 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1025 22:06:48.375776 112102 kubeadm.go:322] [certs] Using the existing "sa" key
I1025 22:06:48.375847 112102 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1025 22:06:48.519858 112102 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I1025 22:06:48.846730 112102 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1025 22:06:49.149765 112102 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1025 22:06:49.206915 112102 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1025 22:06:49.207891 112102 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1025 22:06:49.210037 112102 out.go:204] - Booting up control plane ...
I1025 22:06:49.210180 112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1025 22:06:49.218015 112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1025 22:06:49.219336 112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1025 22:06:49.220194 112102 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1025 22:06:49.224332 112102 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1025 22:07:29.226615 112102 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I1025 22:10:49.230747 112102 kubeadm.go:322]
I1025 22:10:49.230818 112102 kubeadm.go:322] Unfortunately, an error has occurred:
I1025 22:10:49.230870 112102 kubeadm.go:322] timed out waiting for the condition
I1025 22:10:49.230883 112102 kubeadm.go:322]
I1025 22:10:49.230933 112102 kubeadm.go:322] This error is likely caused by:
I1025 22:10:49.230980 112102 kubeadm.go:322] - The kubelet is not running
I1025 22:10:49.231121 112102 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1025 22:10:49.231142 112102 kubeadm.go:322]
I1025 22:10:49.231267 112102 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1025 22:10:49.231312 112102 kubeadm.go:322] - 'systemctl status kubelet'
I1025 22:10:49.231353 112102 kubeadm.go:322] - 'journalctl -xeu kubelet'
I1025 22:10:49.231365 112102 kubeadm.go:322]
I1025 22:10:49.231497 112102 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1025 22:10:49.231614 112102 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
I1025 22:10:49.231714 112102 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I1025 22:10:49.231777 112102 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I1025 22:10:49.231874 112102 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I1025 22:10:49.231917 112102 kubeadm.go:322] - 'docker logs CONTAINERID'
I1025 22:10:49.234948 112102 kubeadm.go:322] W1025 22:06:47.490375 26648 validation.go:28] Cannot validate kubelet config - no validator is available
I1025 22:10:49.235094 112102 kubeadm.go:322] W1025 22:06:47.490532 26648 validation.go:28] Cannot validate kube-proxy config - no validator is available
I1025 22:10:49.235307 112102 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I1025 22:10:49.235455 112102 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1025 22:10:49.235607 112102 kubeadm.go:322] W1025 22:06:49.213271 26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1025 22:10:49.235756 112102 kubeadm.go:322] W1025 22:06:49.214615 26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1025 22:10:49.235867 112102 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1025 22:10:49.235954 112102 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I1025 22:10:49.239352 112102 kubeadm.go:406] StartCluster complete in 12m57.848026362s
I1025 22:10:49.239469 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1025 22:10:49.299018 112102 logs.go:284] 1 containers: [72913cce086f]
I1025 22:10:49.299094 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1025 22:10:49.348618 112102 logs.go:284] 1 containers: [05d86b5157b9]
I1025 22:10:49.348681 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1025 22:10:49.392720 112102 logs.go:284] 0 containers: []
W1025 22:10:49.392746 112102 logs.go:286] No container was found matching "coredns"
I1025 22:10:49.392806 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1025 22:10:49.458837 112102 logs.go:284] 1 containers: [4f8e9c9873a8]
I1025 22:10:49.458936 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1025 22:10:49.513201 112102 logs.go:284] 0 containers: []
W1025 22:10:49.513230 112102 logs.go:286] No container was found matching "kube-proxy"
I1025 22:10:49.513292 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1025 22:10:49.573557 112102 logs.go:284] 2 containers: [977de49f1ea1 4f97f88c4d42]
I1025 22:10:49.573653 112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1025 22:10:49.621005 112102 logs.go:284] 0 containers: []
W1025 22:10:49.621028 112102 logs.go:286] No container was found matching "kindnet"
I1025 22:10:49.621046 112102 logs.go:123] Gathering logs for kube-controller-manager [977de49f1ea1] ...
I1025 22:10:49.621058 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977de49f1ea1"
I1025 22:10:49.668886 112102 logs.go:123] Gathering logs for Docker ...
I1025 22:10:49.668926 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1025 22:10:49.765973 112102 logs.go:123] Gathering logs for container status ...
I1025 22:10:49.766017 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1025 22:10:49.828172 112102 logs.go:123] Gathering logs for dmesg ...
I1025 22:10:49.828208 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1025 22:10:49.854730 112102 logs.go:123] Gathering logs for describe nodes ...
I1025 22:10:49.854765 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1025 22:10:49.965289 112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
error: tls: private key does not match public key
output:
** stderr **
error: tls: private key does not match public key
** /stderr **
I1025 22:10:49.965329 112102 logs.go:123] Gathering logs for kube-scheduler [4f8e9c9873a8] ...
I1025 22:10:49.965348 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f8e9c9873a8"
I1025 22:10:50.109694 112102 logs.go:123] Gathering logs for kube-controller-manager [4f97f88c4d42] ...
I1025 22:10:50.109737 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f97f88c4d42"
I1025 22:10:50.176695 112102 logs.go:123] Gathering logs for kubelet ...
I1025 22:10:50.176738 112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1025 22:10:50.216154 112102 logs.go:138] Found kubelet problem: Oct 25 22:10:32 stopped-upgrade-634233 kubelet[1836]: E1025 22:10:32.398652 1836 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:10:50.256525 112102 logs.go:138] Found kubelet problem: Oct 25 22:10:45 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:45.309286 3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
W1025 22:10:50.259352 112102 logs.go:138] Found kubelet problem: Oct 25 22:10:46 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:46.288468 3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:10:50.269189 112102 logs.go:123] Gathering logs for kube-apiserver [72913cce086f] ...
I1025 22:10:50.269249 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72913cce086f"
I1025 22:10:50.374720 112102 logs.go:123] Gathering logs for etcd [05d86b5157b9] ...
I1025 22:10:50.374759 112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d86b5157b9"
W1025 22:10:50.425714 112102 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1025 22:06:47.490375 26648 validation.go:28] Cannot validate kubelet config - no validator is available
W1025 22:06:47.490532 26648 validation.go:28] Cannot validate kube-proxy config - no validator is available
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1025 22:06:49.213271 26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1025 22:06:49.214615 26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1025 22:10:50.425783 112102 out.go:239] *
*
W1025 22:10:50.425856 112102 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1025 22:06:47.490375 26648 validation.go:28] Cannot validate kubelet config - no validator is available
W1025 22:06:47.490532 26648 validation.go:28] Cannot validate kube-proxy config - no validator is available
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1025 22:06:49.213271 26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1025 22:06:49.214615 26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1025 22:06:47.490375 26648 validation.go:28] Cannot validate kubelet config - no validator is available
W1025 22:06:47.490532 26648 validation.go:28] Cannot validate kube-proxy config - no validator is available
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1025 22:06:49.213271 26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1025 22:06:49.214615 26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1025 22:10:50.425886 112102 out.go:239] *
*
W1025 22:10:50.427086 112102 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1025 22:10:50.429919 112102 out.go:177] X Problems detected in kubelet:
I1025 22:10:50.431268 112102 out.go:177] Oct 25 22:10:32 stopped-upgrade-634233 kubelet[1836]: E1025 22:10:32.398652 1836 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:10:50.432682 112102 out.go:177] Oct 25 22:10:45 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:45.309286 3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:10:50.434174 112102 out.go:177] Oct 25 22:10:46 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:46.288468 3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
I1025 22:10:50.437400 112102 out.go:177]
W1025 22:10:50.438825 112102 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1025 22:06:47.490375 26648 validation.go:28] Cannot validate kubelet config - no validator is available
W1025 22:06:47.490532 26648 validation.go:28] Cannot validate kube-proxy config - no validator is available
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1025 22:06:49.213271 26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1025 22:06:49.214615 26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1025 22:10:50.438910 112102 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1025 22:10:50.438941 112102 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1025 22:10:50.440484 112102 out.go:177]
** /stderr **
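The suggestion in the log above says to check `journalctl -xeu kubelet`. A minimal sketch of filtering that output for the cgroup-driver mismatch this failure points at; the sample journal line below is illustrative, not taken from this run, and the real journal would first need to be captured (e.g. via `minikube ssh`):

```shell
# Write an illustrative kubelet journal excerpt (NOT from this run) and
# filter it for cgroup-driver errors, the usual cause of
# K8S_KUBELET_NOT_RUNNING when Docker reports "cgroupfs".
cat <<'EOF' > /tmp/kubelet-journal.log
kubelet[1234]: failed to run Kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
EOF
grep -i 'cgroup driver' /tmp/kubelet-journal.log
```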
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-634233 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : exit status 109
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (1037.86s)
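A hedged sketch of the retry the log's suggestion describes: detect the cgroup driver Docker actually reports and pass it to the kubelet via `--extra-config`, so the two match. The profile name and driver come from this log; the fallback to "systemd" when Docker is unreachable is an assumption for illustration, and the command is echoed rather than executed:

```shell
# Detect Docker's cgroup driver; fall back to "systemd" (the driver the
# kubeadm warning above recommends) when Docker is not reachable, which is
# an assumption made here for illustration.
CGROUP_DRIVER="$(docker info --format '{{.CgroupDriver}}' 2>/dev/null || echo systemd)"
CGROUP_DRIVER="${CGROUP_DRIVER:-systemd}"

# Echo (rather than run) the retry suggested by the log, passing the
# detected driver through to the kubelet.
echo minikube start -p stopped-upgrade-634233 --driver=kvm2 \
  "--extra-config=kubelet.cgroup-driver=${CGROUP_DRIVER}"
```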