=== RUN TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run: /tmp/minikube-v1.22.0.965652295.exe start -p running-upgrade-502460 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.22.0.965652295.exe start -p running-upgrade-502460 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd: (2m6.784359759s)
version_upgrade_test.go:142: (dbg) Run: out/minikube-linux-amd64 start -p running-upgrade-502460 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-502460 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd: exit status 109 (12m56.705572979s)
-- stdout --
* [running-upgrade-502460] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17086
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.28.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.0
* Using the kvm2 driver based on existing profile
* Starting control plane node running-upgrade-502460 in cluster running-upgrade-502460
* Updating the running kvm2 "running-upgrade-502460" VM ...
* Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0823 19:01:59.559009 46108 out.go:296] Setting OutFile to fd 1 ...
I0823 19:01:59.559168 46108 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 19:01:59.559179 46108 out.go:309] Setting ErrFile to fd 2...
I0823 19:01:59.559187 46108 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 19:01:59.559473 46108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
I0823 19:01:59.560234 46108 out.go:303] Setting JSON to false
I0823 19:01:59.561552 46108 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6264,"bootTime":1692811056,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0823 19:01:59.561630 46108 start.go:138] virtualization: kvm guest
I0823 19:01:59.564489 46108 out.go:177] * [running-upgrade-502460] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I0823 19:01:59.566579 46108 out.go:177] - MINIKUBE_LOCATION=17086
I0823 19:01:59.566602 46108 notify.go:220] Checking for updates...
I0823 19:01:59.568185 46108 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0823 19:01:59.569745 46108 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
I0823 19:01:59.571279 46108 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
I0823 19:01:59.572661 46108 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0823 19:01:59.573977 46108 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0823 19:01:59.576548 46108 config.go:182] Loaded profile config "running-upgrade-502460": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
I0823 19:01:59.578399 46108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 19:01:59.578458 46108 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 19:01:59.593536 46108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
I0823 19:01:59.593988 46108 main.go:141] libmachine: () Calling .GetVersion
I0823 19:01:59.594600 46108 main.go:141] libmachine: Using API Version 1
I0823 19:01:59.594630 46108 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 19:01:59.594973 46108 main.go:141] libmachine: () Calling .GetMachineName
I0823 19:01:59.595142 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
I0823 19:01:59.597035 46108 out.go:177] * Kubernetes 1.28.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.0
I0823 19:01:59.598452 46108 driver.go:373] Setting default libvirt URI to qemu:///system
I0823 19:01:59.598879 46108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 19:01:59.598931 46108 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 19:01:59.613924 46108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
I0823 19:01:59.614273 46108 main.go:141] libmachine: () Calling .GetVersion
I0823 19:01:59.614877 46108 main.go:141] libmachine: Using API Version 1
I0823 19:01:59.614917 46108 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 19:01:59.615252 46108 main.go:141] libmachine: () Calling .GetMachineName
I0823 19:01:59.615454 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
I0823 19:01:59.656519 46108 out.go:177] * Using the kvm2 driver based on existing profile
I0823 19:01:59.657958 46108 start.go:298] selected driver: kvm2
I0823 19:01:59.657974 46108 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-502460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.22.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:running-upgrade-502460 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0823 19:01:59.658091 46108 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0823 19:01:59.659010 46108 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0823 19:01:59.659139 46108 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17086-11104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0823 19:01:59.674948 46108 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
I0823 19:01:59.675250 46108 cni.go:84] Creating CNI manager for ""
I0823 19:01:59.675264 46108 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0823 19:01:59.675273 46108 start_flags.go:319] config:
{Name:running-upgrade-502460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.22.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:running-upgrade-502460 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0823 19:01:59.675424 46108 iso.go:125] acquiring lock: {Name:mk81cce7a5d7f5e981d80e681dab8a3ecaaface9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0823 19:01:59.677118 46108 out.go:177] * Starting control plane node running-upgrade-502460 in cluster running-upgrade-502460
I0823 19:01:59.678407 46108 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
I0823 19:01:59.678447 46108 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4
I0823 19:01:59.678469 46108 cache.go:57] Caching tarball of preloaded images
I0823 19:01:59.678570 46108 preload.go:174] Found /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0823 19:01:59.678587 46108 cache.go:60] Finished verifying existence of preloaded tar for v1.21.2 on containerd
I0823 19:01:59.678735 46108 profile.go:148] Saving config to /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/config.json ...
I0823 19:01:59.678913 46108 start.go:365] acquiring machines lock for running-upgrade-502460: {Name:mk1833667e1e194459e10edb6eaddedbcc5a0864 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0823 19:02:09.126694 46108 start.go:369] acquired machines lock for "running-upgrade-502460" in 9.447741547s
I0823 19:02:09.126754 46108 start.go:96] Skipping create...Using existing machine configuration
I0823 19:02:09.126766 46108 fix.go:54] fixHost starting:
I0823 19:02:09.127167 46108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 19:02:09.127200 46108 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 19:02:09.146641 46108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
I0823 19:02:09.147102 46108 main.go:141] libmachine: () Calling .GetVersion
I0823 19:02:09.147642 46108 main.go:141] libmachine: Using API Version 1
I0823 19:02:09.147665 46108 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 19:02:09.148028 46108 main.go:141] libmachine: () Calling .GetMachineName
I0823 19:02:09.148187 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
I0823 19:02:09.148320 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetState
I0823 19:02:09.149968 46108 fix.go:102] recreateIfNeeded on running-upgrade-502460: state=Running err=<nil>
W0823 19:02:09.150005 46108 fix.go:128] unexpected machine state, will restart: <nil>
I0823 19:02:09.151742 46108 out.go:177] * Updating the running kvm2 "running-upgrade-502460" VM ...
I0823 19:02:09.153376 46108 machine.go:88] provisioning docker machine ...
I0823 19:02:09.153398 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
I0823 19:02:09.153597 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetMachineName
I0823 19:02:09.153762 46108 buildroot.go:166] provisioning hostname "running-upgrade-502460"
I0823 19:02:09.153785 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetMachineName
I0823 19:02:09.153937 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
I0823 19:02:09.156271 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.156684 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
I0823 19:02:09.156722 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.156859 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
I0823 19:02:09.157024 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
I0823 19:02:09.157170 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
I0823 19:02:09.157281 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
I0823 19:02:09.157436 46108 main.go:141] libmachine: Using SSH client type: native
I0823 19:02:09.158184 46108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.61.47 22 <nil> <nil>}
I0823 19:02:09.158206 46108 main.go:141] libmachine: About to run SSH command:
sudo hostname running-upgrade-502460 && echo "running-upgrade-502460" | sudo tee /etc/hostname
I0823 19:02:09.280690 46108 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-502460
I0823 19:02:09.280711 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
I0823 19:02:09.283814 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.284222 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
I0823 19:02:09.284254 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.284446 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
I0823 19:02:09.284618 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
I0823 19:02:09.284756 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
I0823 19:02:09.284871 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
I0823 19:02:09.285058 46108 main.go:141] libmachine: Using SSH client type: native
I0823 19:02:09.285727 46108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.61.47 22 <nil> <nil>}
I0823 19:02:09.285755 46108 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\srunning-upgrade-502460' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-502460/g' /etc/hosts;
else
echo '127.0.1.1 running-upgrade-502460' | sudo tee -a /etc/hosts;
fi
fi
I0823 19:02:09.403737 46108 main.go:141] libmachine: SSH cmd err, output: <nil>:
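The two SSH commands above show minikube provisioning the guest's hostname over SSH. Below is a minimal Go sketch of that run-one-command-over-SSH pattern using golang.org/x/crypto/ssh; the runSSH helper and the address/user/key values are stand-ins taken from this log, not minikube's actual sshutil implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH is a hypothetical helper: connect with a private key and run one
// command, returning its combined output.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Values mirror the log: guest IP 192.168.61.47, user "docker", and the
	// hostname command shown above.
	out, err := runSSH("192.168.61.47:22", "docker",
		"/home/jenkins/minikube-integration/17086-11104/.minikube/machines/running-upgrade-502460/id_rsa",
		`sudo hostname running-upgrade-502460 && echo "running-upgrade-502460" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}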
I0823 19:02:09.403759 46108 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17086-11104/.minikube CaCertPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17086-11104/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17086-11104/.minikube}
I0823 19:02:09.403798 46108 buildroot.go:174] setting up certificates
I0823 19:02:09.403812 46108 provision.go:83] configureAuth start
I0823 19:02:09.403825 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetMachineName
I0823 19:02:09.404148 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetIP
I0823 19:02:09.407289 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.407688 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
I0823 19:02:09.407718 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.407997 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
I0823 19:02:09.410663 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.411103 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
I0823 19:02:09.411135 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.411273 46108 provision.go:138] copyHostCerts
I0823 19:02:09.411330 46108 exec_runner.go:144] found /home/jenkins/minikube-integration/17086-11104/.minikube/ca.pem, removing ...
I0823 19:02:09.411349 46108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17086-11104/.minikube/ca.pem
I0823 19:02:09.411413 46108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17086-11104/.minikube/ca.pem (1078 bytes)
I0823 19:02:09.411513 46108 exec_runner.go:144] found /home/jenkins/minikube-integration/17086-11104/.minikube/cert.pem, removing ...
I0823 19:02:09.411523 46108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17086-11104/.minikube/cert.pem
I0823 19:02:09.411553 46108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17086-11104/.minikube/cert.pem (1123 bytes)
I0823 19:02:09.411629 46108 exec_runner.go:144] found /home/jenkins/minikube-integration/17086-11104/.minikube/key.pem, removing ...
I0823 19:02:09.411641 46108 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17086-11104/.minikube/key.pem
I0823 19:02:09.411665 46108 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17086-11104/.minikube/key.pem (1675 bytes)
I0823 19:02:09.411722 46108 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17086-11104/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-502460 san=[192.168.61.47 192.168.61.47 localhost 127.0.0.1 minikube running-upgrade-502460]
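The provision.go line above generates a CA-signed server certificate with that SAN list. A hedged sketch of the same idea with Go's crypto/x509 follows (errors elided for brevity); the key size and exact certificate fields are assumptions for illustration, not minikube's code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA standing in for minikube's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-502460"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "running-upgrade-502460"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.47"), net.ParseIP("127.0.0.1")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
}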
I0823 19:02:09.571903 46108 provision.go:172] copyRemoteCerts
I0823 19:02:09.571959 46108 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0823 19:02:09.571981 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
I0823 19:02:09.575284 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.575729 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
I0823 19:02:09.575777 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.575989 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
I0823 19:02:09.576182 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
I0823 19:02:09.576361 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
I0823 19:02:09.576514 46108 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/running-upgrade-502460/id_rsa Username:docker}
I0823 19:02:09.677496 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0823 19:02:09.699884 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0823 19:02:09.722022 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0823 19:02:09.740667 46108 provision.go:86] duration metric: configureAuth took 336.842286ms
I0823 19:02:09.740693 46108 buildroot.go:189] setting minikube options for container-runtime
I0823 19:02:09.740926 46108 config.go:182] Loaded profile config "running-upgrade-502460": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
I0823 19:02:09.740941 46108 machine.go:91] provisioned docker machine in 587.553047ms
I0823 19:02:09.740949 46108 start.go:300] post-start starting for "running-upgrade-502460" (driver="kvm2")
I0823 19:02:09.740964 46108 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0823 19:02:09.740993 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
I0823 19:02:09.741339 46108 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0823 19:02:09.741366 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
I0823 19:02:09.744605 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.745027 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
I0823 19:02:09.745072 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.745341 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
I0823 19:02:09.745557 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
I0823 19:02:09.745755 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
I0823 19:02:09.745918 46108 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/running-upgrade-502460/id_rsa Username:docker}
I0823 19:02:09.839563 46108 ssh_runner.go:195] Run: cat /etc/os-release
I0823 19:02:09.844913 46108 info.go:137] Remote host: Buildroot 2020.02.12
I0823 19:02:09.844941 46108 filesync.go:126] Scanning /home/jenkins/minikube-integration/17086-11104/.minikube/addons for local assets ...
I0823 19:02:09.845035 46108 filesync.go:126] Scanning /home/jenkins/minikube-integration/17086-11104/.minikube/files for local assets ...
I0823 19:02:09.845134 46108 filesync.go:149] local asset: /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
I0823 19:02:09.845250 46108 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0823 19:02:09.853713 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
I0823 19:02:09.876199 46108 start.go:303] post-start completed in 135.236199ms
I0823 19:02:09.876226 46108 fix.go:56] fixHost completed within 749.461588ms
I0823 19:02:09.876252 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
I0823 19:02:09.878889 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.879326 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
I0823 19:02:09.879365 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:09.879585 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
I0823 19:02:09.879761 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
I0823 19:02:09.879970 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
I0823 19:02:09.880175 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
I0823 19:02:09.880407 46108 main.go:141] libmachine: Using SSH client type: native
I0823 19:02:09.880792 46108 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil> [] 0s} 192.168.61.47 22 <nil> <nil>}
I0823 19:02:09.880806 46108 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0823 19:02:10.002434 46108 main.go:141] libmachine: SSH cmd err, output: <nil>: 1692817329.999811244
I0823 19:02:10.002457 46108 fix.go:206] guest clock: 1692817329.999811244
I0823 19:02:10.002467 46108 fix.go:219] Guest: 2023-08-23 19:02:09.999811244 +0000 UTC Remote: 2023-08-23 19:02:09.876231253 +0000 UTC m=+10.361617869 (delta=123.579991ms)
I0823 19:02:10.002514 46108 fix.go:190] guest clock delta is within tolerance: 123.579991ms
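The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the delta is small. A small Go sketch of that check; the one-second tolerance is an assumed value for illustration.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses `date +%s.%N` output from the guest and returns
// its offset from the host clock.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// The two timestamps from the log above; float parsing loses some
	// nanosecond precision, which is fine for a skew check.
	host := time.Unix(1692817329, 876231253)
	d, err := guestClockDelta("1692817329.999811244", host)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v, within 1s tolerance: %v\n", d, d > -time.Second && d < time.Second)
}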
I0823 19:02:10.002524 46108 start.go:83] releasing machines lock for "running-upgrade-502460", held for 875.807589ms
I0823 19:02:10.002553 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
I0823 19:02:10.002822 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetIP
I0823 19:02:10.005630 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:10.006011 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
I0823 19:02:10.006066 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:10.006256 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
I0823 19:02:10.006804 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
I0823 19:02:10.006982 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .DriverName
I0823 19:02:10.007076 46108 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0823 19:02:10.007136 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
I0823 19:02:10.007186 46108 ssh_runner.go:195] Run: cat /version.json
I0823 19:02:10.007215 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHHostname
I0823 19:02:10.010343 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:10.010472 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:10.010988 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
I0823 19:02:10.011043 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:10.011079 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
I0823 19:02:10.011099 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:10.011228 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
I0823 19:02:10.011426 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
I0823 19:02:10.011468 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHPort
I0823 19:02:10.011569 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHKeyPath
I0823 19:02:10.011688 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
I0823 19:02:10.011697 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetSSHUsername
I0823 19:02:10.011896 46108 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/running-upgrade-502460/id_rsa Username:docker}
I0823 19:02:10.012646 46108 sshutil.go:53] new ssh client: &{IP:192.168.61.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17086-11104/.minikube/machines/running-upgrade-502460/id_rsa Username:docker}
W0823 19:02:10.121853 46108 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
stdout:
stderr:
cat: /version.json: No such file or directory
I0823 19:02:10.121928 46108 ssh_runner.go:195] Run: systemctl --version
I0823 19:02:10.127729 46108 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0823 19:02:10.133774 46108 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0823 19:02:10.133855 46108 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0823 19:02:10.152518 46108 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
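The find/mv command above renames bridge and podman CNI configs so the runtime ignores them. Below is an in-process Go equivalent of that step, offered as a sketch; the directory and name predicates are taken from the logged command.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Match bridge/podman configs that are not already disabled,
		// mirroring the find predicate in the logged command.
		if (strings.Contains(name, "bridge") || strings.Contains(name, "podman")) &&
			!strings.HasSuffix(name, ".mk_disabled") {
			old := filepath.Join(dir, name)
			if err := os.Rename(old, old+".mk_disabled"); err != nil {
				fmt.Println(err)
			} else {
				fmt.Printf("disabled %s\n", old)
			}
		}
	}
}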
I0823 19:02:10.152557 46108 start.go:466] detecting cgroup driver to use...
I0823 19:02:10.152660 46108 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0823 19:02:10.177658 46108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0823 19:02:10.192068 46108 docker.go:196] disabling cri-docker service (if available) ...
I0823 19:02:10.192129 46108 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0823 19:02:10.201976 46108 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0823 19:02:10.231584 46108 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
W0823 19:02:10.248997 46108 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
stdout:
stderr:
Failed to disable unit: Unit file cri-docker.socket does not exist.
I0823 19:02:10.249116 46108 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0823 19:02:10.490697 46108 docker.go:212] disabling docker service ...
I0823 19:02:10.490764 46108 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0823 19:02:10.504028 46108 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0823 19:02:10.515835 46108 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0823 19:02:10.707732 46108 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0823 19:02:10.930920 46108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0823 19:02:10.958665 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0823 19:02:10.984997 46108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
I0823 19:02:11.001419 46108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0823 19:02:11.009827 46108 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0823 19:02:11.009882 46108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0823 19:02:11.018171 46108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0823 19:02:11.025065 46108 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0823 19:02:11.032516 46108 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0823 19:02:11.040957 46108 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0823 19:02:11.051305 46108 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0823 19:02:11.058329 46108 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0823 19:02:11.064752 46108 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0823 19:02:11.072395 46108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0823 19:02:11.231303 46108 ssh_runner.go:195] Run: sudo systemctl restart containerd
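The series of sed invocations above rewrites /etc/containerd/config.toml in place, for example forcing SystemdCgroup = false to match the cgroupfs driver. A Go sketch of one of those substitutions done with regexp instead of sed; the sample TOML snippet is illustrative.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Same substitution as the logged sed: flip SystemdCgroup to false
	// while preserving the line's indentation.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}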
I0823 19:02:11.267644 46108 start.go:513] Will wait 60s for socket path /run/containerd/containerd.sock
I0823 19:02:11.267731 46108 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0823 19:02:11.275085 46108 retry.go:31] will retry after 897.66326ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
I0823 19:02:12.172970 46108 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0823 19:02:12.178619 46108 retry.go:31] will retry after 1.167959927s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
I0823 19:02:13.346903 46108 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
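The retry.go lines above poll for the containerd socket with a randomized backoff until it appears. A minimal sketch of that retry-with-backoff pattern; the 500ms-plus-jitter policy is an assumption, not minikube's exact retry.go logic.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs probe with a randomized backoff until it succeeds or
// the deadline passes.
func retryUntil(deadline time.Duration, probe func() error) error {
	start := time.Now()
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: %w", err)
		}
		backoff := 500*time.Millisecond + time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
	}
}

func main() {
	attempts := 0
	_ = retryUntil(60*time.Second, func() error {
		attempts++
		if attempts < 3 { // pretend the socket appears on the third probe
			return errors.New("stat /run/containerd/containerd.sock: no such file or directory")
		}
		return nil
	})
}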
I0823 19:02:13.354663 46108 start.go:534] Will wait 60s for crictl version
I0823 19:02:13.354728 46108 ssh_runner.go:195] Run: which crictl
I0823 19:02:13.359400 46108 ssh_runner.go:195] Run: sudo /bin/crictl version
I0823 19:02:13.382614 46108 start.go:550] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.4.4
RuntimeApiVersion: v1alpha2
I0823 19:02:13.382683 46108 ssh_runner.go:195] Run: containerd --version
I0823 19:02:13.425358 46108 ssh_runner.go:195] Run: containerd --version
I0823 19:02:13.462177 46108 out.go:177] * Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
I0823 19:02:13.463371 46108 main.go:141] libmachine: (running-upgrade-502460) Calling .GetIP
I0823 19:02:13.466725 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:13.467124 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1f:b8", ip: ""} in network mk-running-upgrade-502460: {Iface:virbr1 ExpiryTime:2023-08-23 20:00:47 +0000 UTC Type:0 Mac:52:54:00:a2:1f:b8 Iaid: IPaddr:192.168.61.47 Prefix:24 Hostname:running-upgrade-502460 Clientid:01:52:54:00:a2:1f:b8}
I0823 19:02:13.467163 46108 main.go:141] libmachine: (running-upgrade-502460) DBG | domain running-upgrade-502460 has defined IP address 192.168.61.47 and MAC address 52:54:00:a2:1f:b8 in network mk-running-upgrade-502460
I0823 19:02:13.467522 46108 ssh_runner.go:195] Run: grep 192.168.61.1 host.minikube.internal$ /etc/hosts
I0823 19:02:13.473273 46108 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
I0823 19:02:13.473348 46108 ssh_runner.go:195] Run: sudo crictl images --output json
I0823 19:02:13.498679 46108 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.21.2". assuming images are not preloaded.
I0823 19:02:13.498761 46108 ssh_runner.go:195] Run: which lz4
I0823 19:02:13.504924 46108 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0823 19:02:13.511041 46108 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0823 19:02:13.511077 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (483579245 bytes)
I0823 19:02:15.616813 46108 containerd.go:547] Took 2.111927 seconds to copy over tarball
I0823 19:02:15.616882 46108 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0823 19:02:19.667461 46108 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.05054972s)
I0823 19:02:19.667493 46108 containerd.go:554] Took 4.050658 seconds to extract the tarball
I0823 19:02:19.667501 46108 ssh_runner.go:146] rm: /preloaded.tar.lz4
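The lines above copy the preloaded image tarball to the guest, unpack it, and delete it. A sketch of the extraction step, shelling out exactly as the logged tar command does; the paths are the ones from the log.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball, equivalent to the
// logged command: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
func extractPreload(tarball, dest string) error {
	out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}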
I0823 19:02:19.708329 46108 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0823 19:02:19.841945 46108 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0823 19:02:20.815014 46108 ssh_runner.go:195] Run: sudo crictl images --output json
I0823 19:02:21.838061 46108 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.023010088s)
I0823 19:02:21.838208 46108 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.21.2". assuming images are not preloaded.
I0823 19:02:21.838222 46108 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.21.2 registry.k8s.io/kube-controller-manager:v1.21.2 registry.k8s.io/kube-scheduler:v1.21.2 registry.k8s.io/kube-proxy:v1.21.2 registry.k8s.io/pause:3.4.1 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns/coredns:v1.8.0 gcr.io/k8s-minikube/storage-provisioner:v5]
I0823 19:02:21.838291 46108 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0823 19:02:21.838321 46108 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.21.2
I0823 19:02:21.838344 46108 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.21.2
I0823 19:02:21.838354 46108 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
I0823 19:02:21.838504 46108 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.21.2
I0823 19:02:21.838531 46108 image.go:134] retrieving image: registry.k8s.io/pause:3.4.1
I0823 19:02:21.838541 46108 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.0
I0823 19:02:21.838557 46108 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.21.2
I0823 19:02:21.839915 46108 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0823 19:02:21.839916 46108 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.21.2
I0823 19:02:21.839928 46108 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
I0823 19:02:21.840053 46108 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.21.2
I0823 19:02:21.840301 46108 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.0: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.0
I0823 19:02:21.840820 46108 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.21.2
I0823 19:02:21.841834 46108 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.21.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.21.2
I0823 19:02:21.841854 46108 image.go:177] daemon lookup for registry.k8s.io/pause:3.4.1: Error response from daemon: No such image: registry.k8s.io/pause:3.4.1
I0823 19:02:22.002403 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.21.2"
I0823 19:02:22.012881 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.21.2"
I0823 19:02:22.028652 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.13-0"
I0823 19:02:22.039212 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.21.2"
I0823 19:02:22.043199 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.8.0"
I0823 19:02:22.074439 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.4.1"
I0823 19:02:22.088783 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.21.2"
I0823 19:02:22.349284 46108 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.21.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.21.2" does not exist at hash "106ff58d4308243e0042862435f5a0b14dd332d8151f17a739046c7df33c7ae6" in container runtime
I0823 19:02:22.349336 46108 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.21.2
I0823 19:02:22.349384 46108 ssh_runner.go:195] Run: which crictl
I0823 19:02:22.905469 46108 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.21.2" needs transfer: "registry.k8s.io/kube-proxy:v1.21.2" does not exist at hash "a6ebd1c1ad9810239a2885494ae92e0230224bafcb39ef1433c6cb49a98b0dfe" in container runtime
I0823 19:02:22.905519 46108 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.21.2
I0823 19:02:22.905595 46108 ssh_runner.go:195] Run: which crictl
I0823 19:02:23.145643 46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.13-0": (1.11695521s)
I0823 19:02:23.145695 46108 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
I0823 19:02:23.145722 46108 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
I0823 19:02:23.145766 46108 ssh_runner.go:195] Run: which crictl
I0823 19:02:23.234349 46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.21.2": (1.195103468s)
I0823 19:02:23.234395 46108 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.21.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.21.2" does not exist at hash "f917b8c8f55b7fd9bcd895920e2c16fb3e3770c94eba844262a57a55c6187d86" in container runtime
I0823 19:02:23.234425 46108 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.21.2
I0823 19:02:23.234475 46108 ssh_runner.go:195] Run: which crictl
I0823 19:02:23.331398 46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.8.0": (1.288163527s)
I0823 19:02:23.331426 46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.4.1": (1.256937619s)
I0823 19:02:23.331449 46108 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.0" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.0" does not exist at hash "296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899" in container runtime
I0823 19:02:23.331476 46108 cache_images.go:116] "registry.k8s.io/pause:3.4.1" needs transfer: "registry.k8s.io/pause:3.4.1" does not exist at hash "0f8457a4c2ecaceac160805013dc3c61c63a1ff3dee74a473a36249a748e0253" in container runtime
I0823 19:02:23.331483 46108 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.0
I0823 19:02:23.331508 46108 cri.go:218] Removing image: registry.k8s.io/pause:3.4.1
I0823 19:02:23.331531 46108 ssh_runner.go:195] Run: which crictl
I0823 19:02:23.331586 46108 ssh_runner.go:195] Run: which crictl
I0823 19:02:23.379526 46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.21.2": (1.290699202s)
I0823 19:02:23.379554 46108 ssh_runner.go:235] Completed: which crictl: (1.030147825s)
I0823 19:02:23.379581 46108 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.21.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.21.2" does not exist at hash "ae24db9aa2cc0d8572cc5c1c0eda9f40e0a8170cecefe742a5d7f1d4170f4eb1" in container runtime
I0823 19:02:23.379615 46108 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.21.2
I0823 19:02:23.379620 46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-apiserver:v1.21.2
I0823 19:02:23.379659 46108 ssh_runner.go:195] Run: which crictl
I0823 19:02:23.379659 46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
I0823 19:02:23.379691 46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-scheduler:v1.21.2
I0823 19:02:23.379621 46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-proxy:v1.21.2
I0823 19:02:23.379731 46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/pause:3.4.1
I0823 19:02:23.379763 46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.0
I0823 19:02:23.469936 46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.0
I0823 19:02:23.470000 46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.21.2
I0823 19:02:23.470024 46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.21.2
I0823 19:02:23.470086 46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.21.2
I0823 19:02:23.470118 46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/pause_3.4.1
I0823 19:02:23.470188 46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.21.2
I0823 19:02:23.470219 46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
I0823 19:02:23.501599 46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.21.2
I0823 19:02:23.753243 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
I0823 19:02:24.194118 46108 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0823 19:02:24.194172 46108 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0823 19:02:24.194219 46108 ssh_runner.go:195] Run: which crictl
I0823 19:02:24.203813 46108 ssh_runner.go:195] Run: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0823 19:02:24.415982 46108 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0823 19:02:24.416108 46108 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I0823 19:02:24.435470 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
I0823 19:02:24.654977 46108 containerd.go:269] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0823 19:02:24.655038 46108 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I0823 19:02:26.165937 46108 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.510873877s)
I0823 19:02:26.165964 46108 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0823 19:02:26.166002 46108 cache_images.go:92] LoadImages completed in 4.327770747s
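The import step above loads the cached storage-provisioner tarball into containerd's k8s.io namespace. A sketch wrapping the same ctr invocation shown in the log; the loadImage helper name is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// loadImage imports a saved image tarball into containerd's k8s.io
// namespace, as the logged ctr command does.
func loadImage(path string) error {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ctr import: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
		fmt.Println(err)
	}
}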
W0823 19:02:26.166072 46108 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17086-11104/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.0: no such file or directory
I0823 19:02:26.166136 46108 ssh_runner.go:195] Run: sudo crictl info
I0823 19:02:26.230797 46108 cni.go:84] Creating CNI manager for ""
I0823 19:02:26.230828 46108 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0823 19:02:26.230848 46108 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0823 19:02:26.230871 46108 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.47 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-502460 NodeName:running-upgrade-502460 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0823 19:02:26.231036 46108 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.61.47
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "running-upgrade-502460"
kubeletExtraArgs:
node-ip: 192.168.61.47
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.61.47"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.21.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
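The rendered kubeadm config above is one YAML stream with four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check before feeding such a file to kubeadm is to decode every document and print its apiVersion and kind; a sketch using gopkg.in/yaml.v3, which is an assumption for illustration rather than what minikube itself links:

    // Sketch: enumerate the documents in a multi-doc kubeadm YAML stream.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path used in the log
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err) // malformed document
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }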
I0823 19:02:26.231130 46108 kubeadm.go:976] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=running-upgrade-502460 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.47
[Install]
config:
{KubernetesVersion:v1.21.2 ClusterName:running-upgrade-502460 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
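The kubelet command line above is applied through a systemd drop-in: the empty `ExecStart=` first clears the packaged unit's command before the full one is set, and the rendered snippet lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp target a few lines below). A sketch of writing such a drop-in and reloading systemd, assuming root and a pre-rendered unit body:

    // Sketch: install a kubelet systemd drop-in and reload unit files.
    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        dir := "/etc/systemd/system/kubelet.service.d"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            panic(err)
        }
        // Body abbreviated; the real one is the [Service] section from the log.
        body := []byte("[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --config=/var/lib/kubelet/config.yaml\n")
        if err := os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), body, 0o644); err != nil {
            panic(err)
        }
        // Make systemd pick up the new drop-in before restarting kubelet.
        if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
            panic(err)
        }
    }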
I0823 19:02:26.231200 46108 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
I0823 19:02:26.257566 46108 binaries.go:44] Found k8s binaries, skipping transfer
I0823 19:02:26.257644 46108 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0823 19:02:26.277182 46108 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (443 bytes)
I0823 19:02:26.310748 46108 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0823 19:02:26.345712 46108 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2128 bytes)
I0823 19:02:26.371696 46108 ssh_runner.go:195] Run: grep 192.168.61.47 control-plane.minikube.internal$ /etc/hosts
I0823 19:02:26.385728 46108 certs.go:56] Setting up /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460 for IP: 192.168.61.47
I0823 19:02:26.385769 46108 certs.go:190] acquiring lock for shared ca certs: {Name:mk306615e8137283da7a256d08e7c92ef0f9dd28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0823 19:02:26.385934 46108 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17086-11104/.minikube/ca.key
I0823 19:02:26.385996 46108 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17086-11104/.minikube/proxy-client-ca.key
I0823 19:02:26.386100 46108 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/client.key
I0823 19:02:26.386179 46108 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/apiserver.key.85e7fa4e
I0823 19:02:26.386250 46108 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/proxy-client.key
I0823 19:02:26.386401 46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/18372.pem (1338 bytes)
W0823 19:02:26.386460 46108 certs.go:433] ignoring /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
I0823 19:02:26.386477 46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca-key.pem (1675 bytes)
I0823 19:02:26.386514 46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/ca.pem (1078 bytes)
I0823 19:02:26.386562 46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/cert.pem (1123 bytes)
I0823 19:02:26.386596 46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/certs/home/jenkins/minikube-integration/17086-11104/.minikube/certs/key.pem (1675 bytes)
I0823 19:02:26.386650 46108 certs.go:437] found cert: /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
I0823 19:02:26.387300 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0823 19:02:26.454265 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0823 19:02:26.492287 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0823 19:02:26.553819 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0823 19:02:26.579785 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0823 19:02:26.613764 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0823 19:02:26.632598 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0823 19:02:26.668044 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
I0823 19:02:26.687571 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
I0823 19:02:26.706734 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0823 19:02:26.731144 46108 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17086-11104/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
I0823 19:02:26.751549 46108 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0823 19:02:26.768580 46108 ssh_runner.go:195] Run: openssl version
I0823 19:02:26.777149 46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
I0823 19:02:26.796389 46108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
I0823 19:02:26.803710 46108 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 23 18:20 /usr/share/ca-certificates/183722.pem
I0823 19:02:26.803760 46108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
I0823 19:02:26.812888 46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
I0823 19:02:26.828576 46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0823 19:02:26.844331 46108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0823 19:02:26.859879 46108 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 23 18:14 /usr/share/ca-certificates/minikubeCA.pem
I0823 19:02:26.859938 46108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0823 19:02:26.879653 46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0823 19:02:26.892331 46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
I0823 19:02:26.912975 46108 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
I0823 19:02:26.922612 46108 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 23 18:20 /usr/share/ca-certificates/18372.pem
I0823 19:02:26.922669 46108 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
I0823 19:02:26.931699 46108 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
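The openssl/ln pairs above maintain OpenSSL's CA lookup directory: `openssl x509 -hash -noout` prints the certificate's subject-name hash (b5213941 for minikubeCA here), and /etc/ssl/certs/<hash>.0 must point at the PEM for verification to find it (the .0 suffix is a collision counter). A sketch of the same hash-and-link step, shelling out to openssl as the log does (linkCert is a hypothetical helper and needs root for /etc/ssl/certs):

    // Sketch: create the /etc/ssl/certs/<subject-hash>.0 symlink for a CA PEM.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func linkCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        os.Remove(link) // replace any stale link, like `ln -fs`
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
    }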
I0823 19:02:26.942427 46108 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0823 19:02:26.947953 46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0823 19:02:26.956823 46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0823 19:02:26.966249 46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0823 19:02:26.974865 46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0823 19:02:26.982698 46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0823 19:02:26.989275 46108 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
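Each `-checkend 86400` above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs must be regenerated. The equivalent check in pure Go with crypto/x509 (a sketch; the log shells out to openssl instead, and expiresWithin is a hypothetical helper):

    // Sketch: report whether a PEM certificate expires within a duration.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // Same semantics as `openssl x509 -checkend <seconds>`.
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour))
    }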
I0823 19:02:26.995927 46108 kubeadm.go:404] StartCluster: {Name:running-upgrade-502460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:running-upgrade-502460 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.47 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0823 19:02:26.996018 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0823 19:02:26.996063 46108 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0823 19:02:27.016716 46108 cri.go:89] found id: "4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14"
I0823 19:02:27.016730 46108 cri.go:89] found id: "e63658b90ce2f6aab6592396765460d6c17c439581ff788a9dde3feda7f5b292"
I0823 19:02:27.016735 46108 cri.go:89] found id: "3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603"
I0823 19:02:27.016738 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:02:27.016741 46108 cri.go:89] found id: ""
I0823 19:02:27.016782 46108 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0823 19:02:27.047809 46108 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603","pid":4636,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603/rootfs","created":"2023-08-23T19:02:25.299486267Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14","pid":4744,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14/rootfs","created":"2023-08-23T19:02:26.819913613Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8","pid":4426,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8/rootfs","created":"2023-08-23T19:02:23.848636081Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-502460_cef8b9b3c429b31bd63c3b57b52e975c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc","pid":4419,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc/rootfs","created":"2023-08-23T19:02:23.852381438Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-running-upgrade-502460_2c981615bb2d798c2adffe440f9b1774"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45","pid":4526,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45/rootfs","created":"2023-08-23T19:02:24.44108726Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-502460_98177f65ecff0fba7d65a15845b2e250"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a","pid":4396,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a/rootfs","created":"2023-08-23T19:02:23.791324924Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_cfbb4f1b-ea68-4fb2-9ea5-2c900170cd7b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e","pid":4627,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e/rootfs","created":"2023-08-23T19:02:25.239549514Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8"},"owner":"root"}]
I0823 19:02:27.047959 46108 cri.go:126] list returned 7 containers
I0823 19:02:27.047976 46108 cri.go:129] container: {ID:3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603 Status:running}
I0823 19:02:27.047995 46108 cri.go:135] skipping {3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603 running}: state = "running", want "paused"
I0823 19:02:27.048007 46108 cri.go:129] container: {ID:4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14 Status:running}
I0823 19:02:27.048012 46108 cri.go:135] skipping {4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14 running}: state = "running", want "paused"
I0823 19:02:27.048018 46108 cri.go:129] container: {ID:59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8 Status:running}
I0823 19:02:27.048026 46108 cri.go:131] skipping 59f034ed7c66da0a566bc29b0abd84a0df7f7654e148076141e5752242b1f3d8 - not in ps
I0823 19:02:27.048031 46108 cri.go:129] container: {ID:825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc Status:running}
I0823 19:02:27.048036 46108 cri.go:131] skipping 825d1e863a23c72e2740eea50e47ccd9bc18c724c50852628115636bd07a8ffc - not in ps
I0823 19:02:27.048040 46108 cri.go:129] container: {ID:8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45 Status:running}
I0823 19:02:27.048051 46108 cri.go:131] skipping 8a43d2a3c2e9e66fc4f8edab7461a1f664c608625d2dff2abb1efa25dcb17b45 - not in ps
I0823 19:02:27.048058 46108 cri.go:129] container: {ID:a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a Status:running}
I0823 19:02:27.048071 46108 cri.go:131] skipping a53af1138e6dd9c8a715e8ab19a3cacbb865e589d343d1dfd2bbca18e9cb950a - not in ps
I0823 19:02:27.048081 46108 cri.go:129] container: {ID:abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e Status:running}
I0823 19:02:27.048090 46108 cri.go:135] skipping {abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e running}: state = "running", want "paused"
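The cri.go decisions above cross-check the `runc list -f json` snapshot against what `crictl ps` reported: sandbox tasks that never appeared in ps are skipped, and containers whose state does not match the requested one ("paused" here, since this pass only unpauses) are skipped as well. A sketch of that filter over the JSON shown earlier (struct fields taken from the log output; filterByState is hypothetical):

    // Sketch: filter `runc list -f json` output the way cri.go does.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func filterByState(raw []byte, inPs map[string]bool, want string) ([]string, error) {
        var list []runcContainer
        if err := json.Unmarshal(raw, &list); err != nil {
            return nil, err
        }
        var keep []string
        for _, c := range list {
            if !inPs[c.ID] {
                continue // "skipping <id> - not in ps"
            }
            if c.Status != want {
                continue // `state = "running", want "paused"`
            }
            keep = append(keep, c.ID)
        }
        return keep, nil
    }

    func main() {
        fmt.Println(filterByState([]byte(`[]`), nil, "paused"))
    }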
I0823 19:02:27.048141 46108 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0823 19:02:27.056549 46108 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I0823 19:02:27.056564 46108 kubeadm.go:636] restartCluster start
I0823 19:02:27.056615 46108 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0823 19:02:27.065569 46108 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0823 19:02:27.066184 46108 kubeconfig.go:135] verify returned: extract IP: "running-upgrade-502460" does not appear in /home/jenkins/minikube-integration/17086-11104/kubeconfig
I0823 19:02:27.066512 46108 kubeconfig.go:146] "running-upgrade-502460" context is missing from /home/jenkins/minikube-integration/17086-11104/kubeconfig - will repair!
I0823 19:02:27.067042 46108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17086-11104/kubeconfig: {Name:mkb6ab3495f5663c5ba2bb1ce0b9748373e0a0b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0823 19:02:27.067885 46108 kapi.go:59] client config for running-upgrade-502460: &rest.Config{Host:"https://192.168.61.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/client.crt", KeyFile:"/home/jenkins/minikube-integration/17086-11104/.minikube/profiles/running-upgrade-502460/client.key", CAFile:"/home/jenkins/minikube-integration/17086-11104/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d61f20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0823 19:02:27.068766 46108 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0823 19:02:27.076116 46108 kubeadm.go:602] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml
+++ /var/tmp/minikube/kubeadm.yaml.new
@@ -52,6 +52,8 @@
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
+hairpinMode: hairpin-veth
+runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
@@ -68,3 +70,7 @@
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
+ tcpEstablishedTimeout: 0s
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
+ tcpCloseWaitTimeout: 0s
-- /stdout --
I0823 19:02:27.076135 46108 kubeadm.go:1128] stopping kube-system containers ...
I0823 19:02:27.076146 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0823 19:02:27.076193 46108 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0823 19:02:27.097890 46108 cri.go:89] found id: "4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14"
I0823 19:02:27.097916 46108 cri.go:89] found id: "e63658b90ce2f6aab6592396765460d6c17c439581ff788a9dde3feda7f5b292"
I0823 19:02:27.097934 46108 cri.go:89] found id: "3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603"
I0823 19:02:27.097940 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:02:27.097945 46108 cri.go:89] found id: ""
I0823 19:02:27.097951 46108 cri.go:234] Stopping containers: [4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14 e63658b90ce2f6aab6592396765460d6c17c439581ff788a9dde3feda7f5b292 3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:02:27.098010 46108 ssh_runner.go:195] Run: which crictl
I0823 19:02:27.101961 46108 ssh_runner.go:195] Run: sudo /bin/crictl stop --timeout=10 4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14 e63658b90ce2f6aab6592396765460d6c17c439581ff788a9dde3feda7f5b292 3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e
I0823 19:02:37.488386 46108 ssh_runner.go:235] Completed: sudo /bin/crictl stop --timeout=10 4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14 e63658b90ce2f6aab6592396765460d6c17c439581ff788a9dde3feda7f5b292 3afb9e6c80883dc3445b52ade523f03850fb45c3829360cb8ccf72f4e7da9603 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e: (10.386376842s)
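`crictl stop --timeout=10` gives each container up to ten seconds to exit on its stop signal before the runtime kills it, which is why stopping the four kube-system containers above accounts for roughly ten of the elapsed seconds. A sketch of the same batch stop (stopContainers is a hypothetical wrapper):

    // Sketch: gracefully stop a batch of CRI containers, as in the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func stopContainers(ids []string) error {
        args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("crictl stop: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(stopContainers([]string{
            "4e4607254692d669a5fdb20163f69fcc84a9ed52628ec7e31eceb1666f2cca14",
        }))
    }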
I0823 19:02:37.488473 46108 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0823 19:02:37.555491 46108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0823 19:02:37.566508 46108 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Aug 23 19:01 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Aug 23 19:01 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2027 Aug 23 19:01 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Aug 23 19:01 /etc/kubernetes/scheduler.conf
I0823 19:02:37.566582 46108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0823 19:02:37.574689 46108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0823 19:02:37.583211 46108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0823 19:02:37.591289 46108 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0823 19:02:37.591349 46108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0823 19:02:37.599844 46108 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0823 19:02:37.609984 46108 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0823 19:02:37.610061 46108 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
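The grep checks above verify that each kubeconfig under /etc/kubernetes points at https://control-plane.minikube.internal:8443; files that do not (controller-manager.conf and scheduler.conf here) are deleted so the kubeconfig init phase below regenerates them against the right endpoint. A sketch of that check-and-remove step (ensureEndpoint is hypothetical):

    // Sketch: drop a kubeconfig that lacks the expected control-plane endpoint.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func ensureEndpoint(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if !bytes.Contains(data, []byte(endpoint)) {
            return os.Remove(path) // kubeadm will recreate it
        }
        return nil
    }

    func main() {
        fmt.Println(ensureEndpoint("/etc/kubernetes/scheduler.conf",
            "https://control-plane.minikube.internal:8443"))
    }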
I0823 19:02:37.619800 46108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0823 19:02:37.631832 46108 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0823 19:02:37.631851 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0823 19:02:37.826333 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0823 19:02:39.040103 46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.213734593s)
I0823 19:02:39.040141 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0823 19:02:39.317183 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0823 19:02:39.443794 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
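Rather than a full `kubeadm init`, the restart path replays individual init phases - certs, kubeconfig, kubelet-start, control-plane, etcd - against the updated config. A sketch of driving that sequence with the binary and config paths from the log (the `env PATH=...` wrapper from the actual commands is omitted for brevity):

    // Sketch: replay the kubeadm init phases used by the cluster restart.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.21.2/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{kubeadm}, append(p, "--config", cfg)...)
            cmd := exec.Command("sudo", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("phase failed:", p, err)
                return
            }
        }
    }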
I0823 19:02:39.544977 46108 api_server.go:52] waiting for apiserver process to appear ...
I0823 19:02:39.545056 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:39.554961 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:40.067059 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:40.566917 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:41.067486 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:41.567526 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:42.067403 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:42.567664 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:43.067041 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:43.566942 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:44.067670 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:44.567600 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:45.067435 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:45.566735 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:46.066756 46108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0823 19:02:46.080206 46108 api_server.go:72] duration metric: took 6.535227462s to wait for apiserver process to appear ...
I0823 19:02:46.080229 46108 api_server.go:88] waiting for apiserver healthz status ...
I0823 19:02:46.080251 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:02:46.080694 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:02:46.080732 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:02:46.081104 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:02:46.581801 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:02:51.582161 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0823 19:02:51.582245 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:02:56.583338 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0823 19:02:56.583390 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:01.583946 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0823 19:03:01.583995 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:06.401118 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": read tcp 192.168.61.1:41362->192.168.61.47:8443: read: connection reset by peer
I0823 19:03:06.401161 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:06.401784 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:06.582150 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:06.582831 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:07.081468 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:07.082173 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:07.581776 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:07.582457 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:08.082118 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:08.082797 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:08.581328 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:08.581998 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:09.081532 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:09.082189 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:09.581363 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:09.691470 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:10.081556 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:10.082168 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:10.581783 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:10.582370 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:11.081989 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:11.082590 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:11.581962 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:11.582585 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:12.081906 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:12.082512 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:12.582144 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:12.582821 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:13.081170 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:13.081851 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:13.581384 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:13.582004 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:14.081571 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:14.082224 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:14.581528 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:14.582212 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:15.081859 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:15.082545 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:15.581855 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:15.582431 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:16.082055 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:16.082763 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:16.581292 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:16.581883 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:17.081705 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:17.082396 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:17.582003 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:17.582635 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:18.081194 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:18.081859 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:18.581388 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:18.582026 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:19.081528 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:19.082168 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:19.581320 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:19.581965 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:20.082014 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:20.082696 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:20.581224 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:20.581877 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:21.081279 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:21.081969 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:21.581478 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:21.582098 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:22.081664 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:22.082341 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:22.581932 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:22.582607 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:23.081175 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:23.081884 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:23.581236 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:23.581933 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:24.081337 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:24.081957 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:24.582191 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:24.582843 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:25.081258 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:25.081820 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:25.581364 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:25.581977 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:26.081514 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:26.082285 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:26.581887 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:26.582455 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:27.081348 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:27.082027 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:27.581464 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:27.582051 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:28.081601 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:28.082201 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:28.581867 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:28.582570 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:29.082227 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:29.082851 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:29.582168 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:34.583322 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0823 19:03:34.583362 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:39.583639 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0823 19:03:39.583689 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:44.584210 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0823 19:03:44.584258 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:49.585306 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
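Every probe in the loop above fails in one of three ways: connection refused (nothing listening on 8443), a ~5s client timeout (a listener that never answers), or a mid-read connection reset - the signature of an apiserver that is crash-looping rather than merely slow. The probe itself is an HTTPS GET against /healthz with certificate verification disabled; a sketch, with the retry interval and overall budget being assumptions:

    // Sketch: poll the apiserver /healthz endpoint until it answers 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, budget time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gaps between timeouts above
            Transport: &http.Transport{
                // The apiserver's serving cert is not in the probe's trust store.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(budget)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // same cadence as the log's retries
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitHealthy("https://192.168.61.47:8443/healthz", time.Minute))
    }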
I0823 19:03:49.585368 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:03:49.585432 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:03:49.602749 46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:03:49.602776 46108 cri.go:89] found id: "8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867"
I0823 19:03:49.602783 46108 cri.go:89] found id: ""
I0823 19:03:49.602791 46108 logs.go:284] 2 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328 8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867]
I0823 19:03:49.602847 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:49.607165 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:49.611706 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:03:49.611776 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:03:49.631507 46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:03:49.631528 46108 cri.go:89] found id: ""
I0823 19:03:49.631536 46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
I0823 19:03:49.631591 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:49.636096 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:03:49.636150 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:03:49.652293 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:03:49.652316 46108 cri.go:89] found id: ""
I0823 19:03:49.652325 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:03:49.652397 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:49.656017 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:03:49.656083 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:03:49.672398 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:03:49.672427 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:03:49.672434 46108 cri.go:89] found id: ""
I0823 19:03:49.672443 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:03:49.672501 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:49.677411 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:49.681743 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:03:49.681797 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:03:49.706309 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:03:49.706334 46108 cri.go:89] found id: ""
I0823 19:03:49.706343 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:03:49.706404 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:49.710957 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:03:49.711012 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:03:49.736053 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:03:49.736096 46108 cri.go:89] found id: "42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
I0823 19:03:49.736104 46108 cri.go:89] found id: ""
I0823 19:03:49.736112 46108 logs.go:284] 2 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4]
I0823 19:03:49.736157 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:49.741190 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:49.746922 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:03:49.746987 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:03:49.768045 46108 cri.go:89] found id: ""
I0823 19:03:49.768069 46108 logs.go:284] 0 containers: []
W0823 19:03:49.768077 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:03:49.768086 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:03:49.768146 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:03:49.807670 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:03:49.807694 46108 cri.go:89] found id: ""
I0823 19:03:49.807703 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:03:49.807759 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:49.813718 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:03:49.813751 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:03:49.826162 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:03:49.826190 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:03:49.868495 46108 logs.go:123] Gathering logs for kube-controller-manager [42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4] ...
I0823 19:03:49.868527 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
I0823 19:03:49.911658 46108 logs.go:123] Gathering logs for container status ...
I0823 19:03:49.911706 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:03:49.941852 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:03:49.941896 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:03:49.964808 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:03:49.964838 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:03:49.990986 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:03:49.991016 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:03:50.020221 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:03:50.020254 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:03:50.038854 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:03:50.038884 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:03:50.099395 46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
I0823 19:03:50.099433 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:03:50.121023 46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
I0823 19:03:50.121052 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:03:50.138361 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:03:50.138386 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:03:50.293892 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
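Note: every "describe nodes" attempt in this run fails the same way because nothing is listening on localhost:8443 yet. Below is a minimal standalone sketch of a pre-flight wait one could run before retrying kubectl; the host, port, and two-minute budget are illustrative assumptions, not minikube's actual retry logic.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Probe the raw TCP port first; "connection refused" here is the
		// same failure kubectl reports above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver port is open; safe to retry kubectl")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for localhost:8443")
}
```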
I0823 19:03:50.293920 46108 logs.go:123] Gathering logs for kube-apiserver [8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867] ...
I0823 19:03:50.293934 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867"
W0823 19:03:50.314929 46108 logs.go:130] failed kube-apiserver [8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867" /bin/bash -c "sudo /bin/crictl logs --tail 400 8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867": Process exited with status 1
stdout:
stderr:
E0823 19:03:50.311640 5852 remote_runtime.go:329] ContainerStatus "8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867": not found
time="2023-08-23T19:03:50Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867\": not found"
output:
** stderr **
E0823 19:03:50.311640 5852 remote_runtime.go:329] ContainerStatus "8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867": not found
time="2023-08-23T19:03:50Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867\": not found"
** /stderr **
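Note: this NotFound is a list-then-fetch race, not a new failure: container 8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867 was returned by an earlier `crictl ps -a --quiet` listing but had been pruned by the time its logs were requested. A hedged sketch of fetching logs defensively by re-listing first; `fetchLogs` is a hypothetical helper for illustration, not minikube's API.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// fetchLogs re-checks that the container still exists before asking for
// its logs, sidestepping the race shown in the log above.
func fetchLogs(id string) (string, error) {
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet").Output()
	if err != nil {
		return "", err
	}
	if !strings.Contains(string(ids), id) {
		return "", fmt.Errorf("container %s disappeared between listing and log fetch", id)
	}
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := fetchLogs("8a376e3e7fe7865f487ae2bdcc042150b4a949e89a748198a5c587ebff728867")
	if err != nil {
		fmt.Println(err) // reports the disappearance instead of a crictl fatal
		return
	}
	fmt.Println(logs)
}
```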
I0823 19:03:50.314973 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:03:50.314991 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:03:50.332180 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:03:50.332208 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:03:52.921123 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:52.921755 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:52.921813 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:03:52.921870 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:03:52.941407 46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:03:52.941437 46108 cri.go:89] found id: ""
I0823 19:03:52.941446 46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
I0823 19:03:52.941516 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:52.945832 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:03:52.945904 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:03:52.965696 46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:03:52.965717 46108 cri.go:89] found id: ""
I0823 19:03:52.965725 46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
I0823 19:03:52.965774 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:52.970033 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:03:52.970100 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:03:52.992730 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:03:52.992752 46108 cri.go:89] found id: ""
I0823 19:03:52.992760 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:03:52.992829 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:52.997556 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:03:52.997631 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:03:53.020896 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:03:53.020927 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:03:53.020934 46108 cri.go:89] found id: ""
I0823 19:03:53.020947 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:03:53.021006 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:53.025657 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:53.029353 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:03:53.029408 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:03:53.048787 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:03:53.048809 46108 cri.go:89] found id: ""
I0823 19:03:53.048818 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:03:53.048883 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:53.052821 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:03:53.052883 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:03:53.073274 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:03:53.073300 46108 cri.go:89] found id: "42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
I0823 19:03:53.073307 46108 cri.go:89] found id: ""
I0823 19:03:53.073316 46108 logs.go:284] 2 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4]
I0823 19:03:53.073376 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:53.077467 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:53.082419 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:03:53.082484 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:03:53.101808 46108 cri.go:89] found id: ""
I0823 19:03:53.101831 46108 logs.go:284] 0 containers: []
W0823 19:03:53.101839 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:03:53.101844 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:03:53.101900 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:03:53.127415 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:03:53.127439 46108 cri.go:89] found id: ""
I0823 19:03:53.127448 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:03:53.127501 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:53.132306 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:03:53.132336 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:03:53.216923 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:03:53.216950 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:03:53.216964 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:03:53.260783 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:03:53.260822 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:03:53.284064 46108 logs.go:123] Gathering logs for container status ...
I0823 19:03:53.284107 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:03:53.310696 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:03:53.310729 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:03:53.323691 46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
I0823 19:03:53.323726 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:03:53.345293 46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
I0823 19:03:53.345319 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:03:53.362319 46108 logs.go:123] Gathering logs for kube-controller-manager [42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4] ...
I0823 19:03:53.362359 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
I0823 19:03:53.402288 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:03:53.402322 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:03:53.418818 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:03:53.418852 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:03:53.482743 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:03:53.482779 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:03:53.540645 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:03:53.540681 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:03:53.567472 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:03:53.567508 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:03:53.601354 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:03:53.601386 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
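Note: the gathering pass above repeats the same fixed set of sources on every retry cycle. A minimal table-driven sketch of that loop; the command strings are copied verbatim from the log, but running them locally through `bash -c` (rather than on the guest via ssh_runner) is a simplifying assumption.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings copied from the log; iteration order of a Go map is
	// unspecified, which is fine for a diagnostic dump.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Printf("==> %s <==\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("gather failed:", err)
		}
		fmt.Print(string(out))
	}
}
```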
I0823 19:03:56.129596 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:56.130240 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:03:56.130283 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:03:56.130336 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:03:56.152583 46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:03:56.152605 46108 cri.go:89] found id: ""
I0823 19:03:56.152611 46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
I0823 19:03:56.152658 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:56.158214 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:03:56.158289 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:03:56.178941 46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:03:56.178967 46108 cri.go:89] found id: ""
I0823 19:03:56.178977 46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
I0823 19:03:56.179029 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:56.184905 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:03:56.184979 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:03:56.205181 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:03:56.205216 46108 cri.go:89] found id: ""
I0823 19:03:56.205227 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:03:56.205284 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:56.211073 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:03:56.211148 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:03:56.232446 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:03:56.232473 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:03:56.232480 46108 cri.go:89] found id: ""
I0823 19:03:56.232488 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:03:56.232550 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:56.238030 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:56.243248 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:03:56.243318 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:03:56.259395 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:03:56.259419 46108 cri.go:89] found id: ""
I0823 19:03:56.259427 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:03:56.259482 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:56.263495 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:03:56.263621 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:03:56.280830 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:03:56.280858 46108 cri.go:89] found id: "42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
I0823 19:03:56.280865 46108 cri.go:89] found id: ""
I0823 19:03:56.280874 46108 logs.go:284] 2 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4]
I0823 19:03:56.280939 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:56.286370 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:56.290218 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:03:56.290282 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:03:56.313418 46108 cri.go:89] found id: ""
I0823 19:03:56.313440 46108 logs.go:284] 0 containers: []
W0823 19:03:56.313447 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:03:56.313454 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:03:56.313522 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:03:56.332979 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:03:56.333010 46108 cri.go:89] found id: ""
I0823 19:03:56.333018 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:03:56.333064 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:56.337242 46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
I0823 19:03:56.337268 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:03:56.354521 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:03:56.354557 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:03:56.378351 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:03:56.378390 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:03:56.425790 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:03:56.425835 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:03:56.487011 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:03:56.487048 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:03:56.501482 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:03:56.501519 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:03:56.599128 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:03:56.599161 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:03:56.599175 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:03:56.630149 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:03:56.630188 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:03:56.646749 46108 logs.go:123] Gathering logs for kube-controller-manager [42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4] ...
I0823 19:03:56.646776 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
I0823 19:03:56.680992 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:03:56.681083 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:03:56.754304 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:03:56.754342 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:03:56.781292 46108 logs.go:123] Gathering logs for container status ...
I0823 19:03:56.781320 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:03:56.810682 46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
I0823 19:03:56.810709 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:03:56.839836 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:03:56.839866 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:03:59.358630 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:03:59.359423 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
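Note: each retry cycle opens with this healthz probe against the control-plane IP. A self-contained sketch of what the check at api_server.go:253/269 appears to do; the 5-second timeout and skipping TLS verification are assumptions for illustration (minikube's real client is configured with the cluster CA).

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // timeout is an assumption
		Transport: &http.Transport{
			// The apiserver's serving cert is cluster-internal, so a bare
			// probe skips verification rather than trusting the host store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.61.47:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. "connect: connection refused"
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
```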
I0823 19:03:59.359486 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:03:59.359547 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:03:59.389657 46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:03:59.389682 46108 cri.go:89] found id: ""
I0823 19:03:59.389691 46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
I0823 19:03:59.389752 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:59.394178 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:03:59.394251 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:03:59.414275 46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:03:59.414303 46108 cri.go:89] found id: ""
I0823 19:03:59.414312 46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
I0823 19:03:59.414378 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:59.419333 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:03:59.419410 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:03:59.440733 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:03:59.440765 46108 cri.go:89] found id: ""
I0823 19:03:59.440774 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:03:59.440830 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:59.446509 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:03:59.446586 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:03:59.468196 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:03:59.468222 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:03:59.468229 46108 cri.go:89] found id: ""
I0823 19:03:59.468238 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:03:59.468302 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:59.474500 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:59.480335 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:03:59.480397 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:03:59.504546 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:03:59.504569 46108 cri.go:89] found id: ""
I0823 19:03:59.504576 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:03:59.504627 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:59.510731 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:03:59.510815 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:03:59.529519 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:03:59.529567 46108 cri.go:89] found id: "42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
I0823 19:03:59.529574 46108 cri.go:89] found id: ""
I0823 19:03:59.529583 46108 logs.go:284] 2 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4]
I0823 19:03:59.529646 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:59.534003 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:59.538363 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:03:59.538432 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:03:59.557294 46108 cri.go:89] found id: ""
I0823 19:03:59.557316 46108 logs.go:284] 0 containers: []
W0823 19:03:59.557323 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:03:59.557328 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:03:59.557377 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:03:59.577710 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:03:59.577733 46108 cri.go:89] found id: ""
I0823 19:03:59.577746 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:03:59.577807 46108 ssh_runner.go:195] Run: which crictl
I0823 19:03:59.583075 46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
I0823 19:03:59.583102 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:03:59.603621 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:03:59.603659 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:03:59.649624 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:03:59.649663 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:03:59.676391 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:03:59.676422 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:03:59.758447 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:03:59.758483 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:03:59.820304 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:03:59.820346 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:03:59.903942 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:03:59.903985 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:03:59.904000 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:03:59.924560 46108 logs.go:123] Gathering logs for container status ...
I0823 19:03:59.924593 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:03:59.955653 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:03:59.955678 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:03:59.967160 46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
I0823 19:03:59.967189 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:03:59.986487 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:03:59.986514 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:00.010795 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:00.010827 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:00.046262 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:04:00.046298 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:00.063940 46108 logs.go:123] Gathering logs for kube-controller-manager [42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4] ...
I0823 19:04:00.063980 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 42930c6d0687e15d8f85f4f435153709e3f0a05840755d07a97438b10bdd31e4"
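Note: the `found id` lines throughout this section come from splitting `crictl ps -a --quiet --name=<component>` output one ID per line; the final `found id: ""` in each group is the trailing blank line of that output. A sketch of that parse; `listContainers` is a hypothetical helper, not minikube's code.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the non-empty IDs printed by crictl, mirroring
// the cri.go "found id" lines while dropping the trailing blank entry.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-scheduler")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
```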
I0823 19:04:02.603550 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:02.604324 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:04:02.604375 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:02.604445 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:02.628170 46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:04:02.628193 46108 cri.go:89] found id: ""
I0823 19:04:02.628200 46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
I0823 19:04:02.628254 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:02.632596 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:02.632671 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:02.653173 46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:04:02.653196 46108 cri.go:89] found id: ""
I0823 19:04:02.653203 46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
I0823 19:04:02.653256 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:02.659210 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:02.659263 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:02.680490 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:02.680512 46108 cri.go:89] found id: ""
I0823 19:04:02.680519 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:02.680567 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:02.687686 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:02.687745 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:02.708135 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:02.708153 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:02.708157 46108 cri.go:89] found id: ""
I0823 19:04:02.708163 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:02.708216 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:02.712890 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:02.717324 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:02.717379 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:02.734883 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:02.734917 46108 cri.go:89] found id: ""
I0823 19:04:02.734927 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:02.734985 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:02.739344 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:02.739400 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:02.755954 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:02.755981 46108 cri.go:89] found id: ""
I0823 19:04:02.755990 46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
I0823 19:04:02.756053 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:02.760162 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:02.760232 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:02.778881 46108 cri.go:89] found id: ""
I0823 19:04:02.778908 46108 logs.go:284] 0 containers: []
W0823 19:04:02.778919 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:02.778926 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:02.778994 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:02.796893 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:02.796918 46108 cri.go:89] found id: ""
I0823 19:04:02.796927 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:02.796984 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:02.802046 46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
I0823 19:04:02.802073 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:04:02.822943 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:02.822979 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:02.851708 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:04:02.851741 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:02.889674 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:02.889720 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:02.911408 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:02.911445 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:02.944479 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:02.944504 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:02.970681 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:02.970712 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:02.997753 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:02.997785 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:03.060708 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:03.060745 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:03.127019 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:03.127056 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:03.140719 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:03.140757 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:03.246015 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:04:03.246042 46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
I0823 19:04:03.246056 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:04:03.266591 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:03.266619 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:05.799418 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:05.800151 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:04:05.800212 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:05.800267 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:05.823657 46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:04:05.823679 46108 cri.go:89] found id: ""
I0823 19:04:05.823688 46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
I0823 19:04:05.823743 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:05.829705 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:05.829775 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:05.850755 46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:04:05.850795 46108 cri.go:89] found id: ""
I0823 19:04:05.850803 46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
I0823 19:04:05.850854 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:05.856211 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:05.856276 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:05.875778 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:05.875797 46108 cri.go:89] found id: ""
I0823 19:04:05.875806 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:05.875863 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:05.880835 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:05.880901 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:05.899063 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:05.899088 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:05.899095 46108 cri.go:89] found id: ""
I0823 19:04:05.899104 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:05.899157 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:05.903709 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:05.907885 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:05.907948 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:05.927949 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:05.927969 46108 cri.go:89] found id: ""
I0823 19:04:05.927976 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:05.928029 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:05.932434 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:05.932493 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:05.951008 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:05.951031 46108 cri.go:89] found id: ""
I0823 19:04:05.951039 46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
I0823 19:04:05.951093 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:05.958246 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:05.958297 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:05.975436 46108 cri.go:89] found id: ""
I0823 19:04:05.975463 46108 logs.go:284] 0 containers: []
W0823 19:04:05.975474 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:05.975482 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:05.975546 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:05.993826 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:05.993874 46108 cri.go:89] found id: ""
I0823 19:04:05.993883 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:05.993952 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:05.998471 46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
I0823 19:04:05.998491 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:04:06.015413 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:06.015450 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:06.039783 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:06.039817 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:06.066586 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:06.066624 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:06.102752 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:06.102783 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:06.169165 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:06.169200 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:06.190726 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:06.190756 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:06.208930 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:06.208957 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:06.277589 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:06.277635 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:06.289477 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:06.289505 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:06.388348 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:04:06.388374 46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
I0823 19:04:06.388386 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:04:06.407928 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:06.407959 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:06.438719 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:04:06.438751 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:08.977132 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:08.977781 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:04:08.977832 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:08.977882 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:08.998294 46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:04:08.998315 46108 cri.go:89] found id: ""
I0823 19:04:08.998321 46108 logs.go:284] 1 containers: [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
I0823 19:04:08.998371 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:09.002307 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:09.002377 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:09.023257 46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:04:09.023292 46108 cri.go:89] found id: ""
I0823 19:04:09.023308 46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
I0823 19:04:09.023371 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:09.027561 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:09.027630 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:09.044233 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:09.044253 46108 cri.go:89] found id: ""
I0823 19:04:09.044259 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:09.044312 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:09.048205 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:09.048275 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:09.064091 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:09.064114 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:09.064119 46108 cri.go:89] found id: ""
I0823 19:04:09.064125 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:09.064175 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:09.068223 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:09.072391 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:09.072457 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:09.089261 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:09.089285 46108 cri.go:89] found id: ""
I0823 19:04:09.089293 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:09.089351 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:09.093647 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:09.093713 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:09.110349 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:09.110366 46108 cri.go:89] found id: ""
I0823 19:04:09.110372 46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
I0823 19:04:09.110415 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:09.114495 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:09.114558 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:09.133422 46108 cri.go:89] found id: ""
I0823 19:04:09.133446 46108 logs.go:284] 0 containers: []
W0823 19:04:09.133456 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:09.133464 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:09.133512 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:09.149623 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:09.149645 46108 cri.go:89] found id: ""
I0823 19:04:09.149653 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:09.149715 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:09.153567 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:09.153599 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
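Every "Gathering logs for X" step is a plain tail of the last 400 lines straight from the CRI, which is why collection keeps working even with the apiserver down. The equivalent manual command, using the storage-provisioner ID resolved above:

  # tail a container's log via crictl (run inside the VM)
  $ sudo crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f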
I0823 19:04:09.171390 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:09.171416 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:09.241594 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:09.241636 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:09.252767 46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
I0823 19:04:09.252793 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:04:09.283901 46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
I0823 19:04:09.283937 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:04:09.299355 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:09.299386 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:09.320130 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:09.320166 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:09.349557 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:04:09.349587 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:09.381178 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:09.381211 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:09.407571 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:09.407600 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:09.468555 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:09.468593 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:09.560084 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
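The describe-nodes probe fails because kubectl on the node targets localhost:8443 (per /var/lib/minikube/kubeconfig) and nothing is accepting connections there; the apiserver itself is down, not kubectl misconfigured. A quick hedged check from the host, assuming ss is available on the ISO:

  # is anything listening on the apiserver port inside the VM?
  $ minikube ssh -p running-upgrade-502460 -- sudo ss -tlnp | grep 8443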
I0823 19:04:09.561144 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:09.561163 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:09.599559 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:09.599590 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:12.146416 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:17.147709 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
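The healthz check is a plain HTTPS GET that minikube abandons after a 5s client deadline (19:04:12 to 19:04:17 above). It can be replayed by hand from the host; -k is needed because the apiserver's certificate is not in the host trust store:

  # a healthy apiserver answers this with "ok"
  $ curl -k --max-time 5 https://192.168.61.47:8443/healthz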
I0823 19:04:17.147788 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:17.147842 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:17.167915 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:17.167944 46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:04:17.167953 46108 cri.go:89] found id: ""
I0823 19:04:17.167967 46108 logs.go:284] 2 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
I0823 19:04:17.168025 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:17.172621 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:17.176588 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:17.176637 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:17.194115 46108 cri.go:89] found id: "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
I0823 19:04:17.194137 46108 cri.go:89] found id: ""
I0823 19:04:17.194146 46108 logs.go:284] 1 containers: [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]
I0823 19:04:17.194195 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:17.198195 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:17.198249 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:17.212835 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:17.212857 46108 cri.go:89] found id: ""
I0823 19:04:17.212866 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:17.212915 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:17.216741 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:17.216802 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:17.237109 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:17.237138 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:17.237144 46108 cri.go:89] found id: ""
I0823 19:04:17.237153 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:17.237215 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:17.241499 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:17.246670 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:17.246738 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:17.267560 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:17.267586 46108 cri.go:89] found id: ""
I0823 19:04:17.267596 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:17.267654 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:17.272746 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:17.272818 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:17.288413 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:17.288431 46108 cri.go:89] found id: ""
I0823 19:04:17.288439 46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
I0823 19:04:17.288497 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:17.293366 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:17.293413 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:17.308748 46108 cri.go:89] found id: ""
I0823 19:04:17.308774 46108 logs.go:284] 0 containers: []
W0823 19:04:17.308785 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:17.308792 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:17.308852 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:17.329847 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:17.329872 46108 cri.go:89] found id: ""
I0823 19:04:17.329881 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:17.329936 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:17.335095 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:04:17.335121 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:17.373018 46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
I0823 19:04:17.373057 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:04:17.395253 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:17.395278 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:17.425070 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:17.425110 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:17.466206 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:17.466234 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:17.491846 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:17.491876 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:17.519607 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:17.519635 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0823 19:04:27.618486 46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.09883103s)
W0823 19:04:27.618554 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
output:
** stderr **
Unable to connect to the server: net/http: TLS handshake timeout
** /stderr **
I0823 19:04:27.618566 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:27.618580 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:27.641768 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:04:27.641793 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:27.669512 46108 logs.go:123] Gathering logs for etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0] ...
I0823 19:04:27.669550 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0"
W0823 19:04:27.686673 46108 logs.go:130] failed etcd [0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0" /bin/bash -c "sudo /bin/crictl logs --tail 400 0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0": Process exited with status 1
stdout:
stderr:
E0823 19:04:27.682272 6645 remote_runtime.go:329] ContainerStatus "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0": not found
time="2023-08-23T19:04:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0\": not found"
output:
** stderr **
E0823 19:04:27.682272 6645 remote_runtime.go:329] ContainerStatus "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0": not found
time="2023-08-23T19:04:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"0d3a9ff4b0326789c9e391f9b19b9dad42da614ae600fb09c617c6dfbbcbeef0\": not found"
** /stderr **
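The NotFound here is a race, not log corruption: the etcd ID 0d3a9ff4... was captured by the listing pass at 19:04:17, the kubelet then replaced the container, and by the time crictl logs ran at 19:04:27 the old container had been garbage-collected (the next listing below resolves the successor ce6911af...). A sketch that narrows the window by re-resolving the ID immediately before reading:

  # re-list and tail in one shot to avoid racing a restart (inside the VM)
  $ id=$(sudo crictl ps -a --quiet --name=etcd | head -n1) && sudo crictl logs --tail 400 "$id"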
I0823 19:04:27.686697 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:27.686711 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:27.750311 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:27.750344 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:27.813436 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:27.813471 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:27.833635 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:27.833661 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:30.351863 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:30.550221 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": read tcp 192.168.61.1:36772->192.168.61.47:8443: read: connection reset by peer
I0823 19:04:30.550285 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:30.550353 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:30.570519 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:30.570544 46108 cri.go:89] found id: "abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:04:30.570550 46108 cri.go:89] found id: ""
I0823 19:04:30.570558 46108 logs.go:284] 2 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328]
I0823 19:04:30.570614 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:30.576052 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:30.580004 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:30.580086 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:30.609883 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:30.609908 46108 cri.go:89] found id: ""
I0823 19:04:30.609917 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:04:30.609965 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:30.615842 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:30.615917 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:30.647642 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:30.647665 46108 cri.go:89] found id: ""
I0823 19:04:30.647673 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:30.647741 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:30.652938 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:30.653002 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:30.675187 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:30.675215 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:30.675222 46108 cri.go:89] found id: ""
I0823 19:04:30.675231 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:30.675288 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:30.680341 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:30.685856 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:30.685932 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:30.706478 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:30.706504 46108 cri.go:89] found id: ""
I0823 19:04:30.706513 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:30.706569 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:30.711231 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:30.711297 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:30.728230 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:30.728257 46108 cri.go:89] found id: ""
I0823 19:04:30.728267 46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
I0823 19:04:30.728335 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:30.734320 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:30.734392 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:30.751781 46108 cri.go:89] found id: ""
I0823 19:04:30.751806 46108 logs.go:284] 0 containers: []
W0823 19:04:30.751816 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:30.751824 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:30.751882 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:30.774806 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:30.774831 46108 cri.go:89] found id: ""
I0823 19:04:30.774840 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:30.774904 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:30.779712 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:30.779742 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:30.799413 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:30.799447 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:30.828917 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:30.828947 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:30.893361 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:30.893395 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:30.989250 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:04:30.989272 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:04:30.989282 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:31.016789 46108 logs.go:123] Gathering logs for kube-apiserver [abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328] ...
I0823 19:04:31.016820 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abcdb35e8a89026753a6f3f06cead40f6630471dfe6e99bb86c6b458c51ca328"
I0823 19:04:31.038094 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:31.038124 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:31.052980 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:04:31.053011 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:31.070711 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:04:31.070742 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:31.110828 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:31.110861 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:31.204670 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:31.204705 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:31.225462 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:31.225504 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:31.263445 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:31.263478 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:31.293188 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:31.293226 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:33.826359 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:33.827025 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
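Note the progression across retries: context deadline exceeded at 19:04:17, connection reset at 19:04:30, and connection refused from 19:04:33 on, while the kube-apiserver listing shrinks from two container IDs to one. That sequence reads like an apiserver that comes partway up and then exits, a crash loop rather than a slow start. Its exit history can be checked from inside the VM, hedged on crictl's state filter:

  # exited kube-apiserver containers; repeated fresh exits indicate a crash loop
  $ sudo crictl ps -a --name kube-apiserver --state exited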
I0823 19:04:33.827079 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:33.827133 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:33.846362 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:33.846392 46108 cri.go:89] found id: ""
I0823 19:04:33.846401 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:04:33.846451 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:33.850535 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:33.850595 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:33.868301 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:33.868323 46108 cri.go:89] found id: ""
I0823 19:04:33.868331 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:04:33.868386 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:33.872403 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:33.872488 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:33.892188 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:33.892217 46108 cri.go:89] found id: ""
I0823 19:04:33.892226 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:33.892285 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:33.896023 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:33.896080 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:33.913400 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:33.913420 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:33.913425 46108 cri.go:89] found id: ""
I0823 19:04:33.913431 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:33.913479 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:33.918329 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:33.923040 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:33.923112 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:33.943496 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:33.943523 46108 cri.go:89] found id: ""
I0823 19:04:33.943533 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:33.943590 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:33.947871 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:33.947924 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:33.967460 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:33.967478 46108 cri.go:89] found id: ""
I0823 19:04:33.967486 46108 logs.go:284] 1 containers: [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
I0823 19:04:33.967550 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:33.972019 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:33.972083 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:33.992206 46108 cri.go:89] found id: ""
I0823 19:04:33.992230 46108 logs.go:284] 0 containers: []
W0823 19:04:33.992239 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:33.992248 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:33.992305 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:34.012861 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:34.012884 46108 cri.go:89] found id: ""
I0823 19:04:34.012892 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:34.012956 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:34.018211 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:04:34.018243 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:34.042458 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:04:34.042492 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:34.061290 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:34.061317 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:34.113097 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:34.113134 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:34.138722 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:34.138748 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:34.151729 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:34.151752 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:34.245758 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:04:34.245779 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:34.245794 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:34.265608 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:34.265637 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:34.290654 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:34.290683 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:34.322342 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:04:34.322383 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:34.363350 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:34.363394 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:34.384170 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:34.384197 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:34.453756 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:34.453799 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:37.023185 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:37.023946 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:04:37.023993 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:37.024036 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:37.050342 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:37.050366 46108 cri.go:89] found id: ""
I0823 19:04:37.050375 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:04:37.050430 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:37.054902 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:37.054953 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:37.073038 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:37.073059 46108 cri.go:89] found id: ""
I0823 19:04:37.073068 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:04:37.073122 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:37.077691 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:37.077761 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:37.095129 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:37.095151 46108 cri.go:89] found id: ""
I0823 19:04:37.095160 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:37.095215 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:37.099250 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:37.099308 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:37.117187 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:37.117205 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:37.117211 46108 cri.go:89] found id: ""
I0823 19:04:37.117219 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:37.117276 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:37.122142 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:37.127299 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:37.127365 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:37.144191 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:37.144213 46108 cri.go:89] found id: ""
I0823 19:04:37.144220 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:37.144265 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:37.150347 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:37.150404 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:37.170969 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:37.170989 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:37.170995 46108 cri.go:89] found id: ""
I0823 19:04:37.171003 46108 logs.go:284] 2 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
I0823 19:04:37.171051 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:37.175726 46108 ssh_runner.go:195] Run: which crictl
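Two kube-controller-manager IDs now match where every earlier pass found one: the kubelet has started a fresh instance (5cc9d21f...) while the original (52797002...) is still retained, and minikube tails both below. To tell current from stale, a hedged inspect of either ID from inside the VM:

  # state plus created/started/finished timestamps show which instance is live
  $ sudo crictl inspect 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636 | grep -E 'state|At'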
I0823 19:04:37.181727 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:37.181776 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:37.199831 46108 cri.go:89] found id: ""
I0823 19:04:37.199856 46108 logs.go:284] 0 containers: []
W0823 19:04:37.199866 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:37.199873 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:37.199931 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:37.217009 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:37.217030 46108 cri.go:89] found id: ""
I0823 19:04:37.217038 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:37.217075 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:37.221307 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:37.221328 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:37.243240 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:04:37.243265 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:37.266080 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:37.266108 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:37.287448 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:37.287476 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:37.313643 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:04:37.313670 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:37.332010 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:37.332036 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:37.401934 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:04:37.401966 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:37.423032 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:04:37.423051 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:37.460361 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:37.460389 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:37.483235 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:37.483267 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:37.517899 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:37.517927 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:37.548071 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:37.548103 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:37.619832 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:37.619866 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:37.631690 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:37.631723 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:37.730233 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:04:40.230746 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:40.231346 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:04:40.231403 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:40.231464 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:40.256053 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:40.256079 46108 cri.go:89] found id: ""
I0823 19:04:40.256087 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:04:40.256140 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:40.261394 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:40.261461 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:40.282848 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:40.282868 46108 cri.go:89] found id: ""
I0823 19:04:40.282877 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:04:40.282924 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:40.287836 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:40.287902 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:40.307273 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:40.307295 46108 cri.go:89] found id: ""
I0823 19:04:40.307303 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:40.307352 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:40.313523 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:40.313606 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:40.330071 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:40.330088 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:40.330091 46108 cri.go:89] found id: ""
I0823 19:04:40.330098 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:40.330140 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:40.334144 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:40.339025 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:40.339076 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:40.359547 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:40.359568 46108 cri.go:89] found id: ""
I0823 19:04:40.359577 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:40.359632 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:40.364039 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:40.364107 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:40.382590 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:40.382617 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:40.382641 46108 cri.go:89] found id: ""
I0823 19:04:40.382648 46108 logs.go:284] 2 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
I0823 19:04:40.382696 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:40.386839 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:40.390744 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:40.390806 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:40.408339 46108 cri.go:89] found id: ""
I0823 19:04:40.408361 46108 logs.go:284] 0 containers: []
W0823 19:04:40.408368 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:40.408374 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:40.408422 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:40.433691 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:40.433716 46108 cri.go:89] found id: ""
I0823 19:04:40.433725 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:40.433775 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:40.440794 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:04:40.440825 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:40.467202 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:40.467239 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:40.501843 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:40.501874 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:40.577973 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:40.578008 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:40.605799 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:40.605838 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:40.620098 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:40.620133 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:40.725365 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:04:40.725393 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:40.725406 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:40.751398 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:40.751433 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:40.815756 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:04:40.815786 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:40.841439 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:40.841470 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:40.868326 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:40.868363 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:40.908012 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:04:40.908057 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:40.931270 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:04:40.931304 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:40.970295 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:40.970326 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:43.493052 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:43.493798 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:04:43.493843 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:43.493899 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:43.514176 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:43.514200 46108 cri.go:89] found id: ""
I0823 19:04:43.514211 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:04:43.514270 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:43.518295 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:43.518362 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:43.536645 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:43.536670 46108 cri.go:89] found id: ""
I0823 19:04:43.536679 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:04:43.536726 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:43.540651 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:43.540715 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:43.556125 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:43.556149 46108 cri.go:89] found id: ""
I0823 19:04:43.556158 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:43.556212 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:43.560202 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:43.560265 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:43.578794 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:43.578816 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:43.578820 46108 cri.go:89] found id: ""
I0823 19:04:43.578827 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:43.578869 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:43.583167 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:43.587509 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:43.587579 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:43.603744 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:43.603770 46108 cri.go:89] found id: ""
I0823 19:04:43.603780 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:43.603831 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:43.607821 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:43.607892 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:43.626283 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:43.626303 46108 cri.go:89] found id: "52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:43.626306 46108 cri.go:89] found id: ""
I0823 19:04:43.626313 46108 logs.go:284] 2 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2]
I0823 19:04:43.626356 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:43.630632 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:43.634182 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:43.634235 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:43.657504 46108 cri.go:89] found id: ""
I0823 19:04:43.657529 46108 logs.go:284] 0 containers: []
W0823 19:04:43.657536 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:43.657560 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:43.657615 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:43.680354 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:43.680373 46108 cri.go:89] found id: ""
I0823 19:04:43.680382 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:43.680438 46108 ssh_runner.go:195] Run: which crictl
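Each cri.go/ssh_runner pair above resolves a component to container IDs with `crictl ps -a --quiet --name=<component>`; an empty result is what produces the `No container was found matching "kindnet"` warning. A sketch of that discovery step (helper name is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // findContainers returns all container IDs (running or exited) whose name
    // matches the given component; --quiet prints one ID per line.
    func findContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
    		ids, err := findContainers(c)
    		if err != nil {
    			fmt.Println(c, "lookup failed:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
    	}
    }

Two IDs for one component, as with kube-scheduler above, usually means an exited earlier instance plus its restarted successor, which is why both get their logs tailed.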
I0823 19:04:43.684968 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:43.684988 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:43.724936 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:43.724981 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:43.747218 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:43.747247 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
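kubelet and containerd run as systemd units inside the VM, so their logs come from journalctl rather than crictl. A sketch of that collector under the same local-shell assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // unitLogs fetches the newest `lines` entries for a systemd unit.
    func unitLogs(unit string, lines int) (string, error) {
    	cmd := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo journalctl -u %s -n %d", unit, lines))
    	out, err := cmd.CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, u := range []string{"kubelet", "containerd"} {
    		logs, err := unitLogs(u, 400)
    		if err != nil {
    			fmt.Printf("journalctl for %s failed: %v\n", u, err)
    			continue
    		}
    		fmt.Printf("=== %s: %d bytes of logs ===\n", u, len(logs))
    	}
    }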
I0823 19:04:43.814673 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:43.814707 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:43.843059 46108 logs.go:123] Gathering logs for kube-controller-manager [52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2] ...
I0823 19:04:43.843088 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 52797002f6908f2e51a90003e7a21566de731690d0d96e2b38f0be42c16ec5c2"
I0823 19:04:43.881388 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:43.881430 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:43.968570 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
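The describe-nodes failure above is a consequence of the same dead apiserver: the kubeconfig inside the VM points kubectl at localhost:8443, where nothing is accepting connections, so the command exits with status 1. The same probe as a sketch (binary and kubeconfig paths taken verbatim from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		// With the apiserver down this reproduces the stderr seen above:
    		// "The connection to the server localhost:8443 was refused".
    		fmt.Printf("describe nodes failed: %v\n%s", err, out)
    	}
    }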
I0823 19:04:43.968596 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:43.968610 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
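The backticked `which crictl || echo crictl` in the container-status command keeps the pipeline intact even when crictl is not on root's PATH, and the trailing `|| sudo docker ps -a` falls back to the docker CLI if crictl fails entirely. The same two-step fallback written out in Go (illustrative only):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl, then falls back to the docker CLI.
    func containerStatus() (string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
    	if err == nil {
    		return string(out), nil
    	}
    	// crictl missing or erroring; try docker instead.
    	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("no container runtime CLI answered:", err)
    		return
    	}
    	fmt.Print(out)
    }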
I0823 19:04:43.994463 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:43.994493 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
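The dmesg collector filters the kernel ring buffer down to warnings and worse: -P disables the pager, -H formats timestamps for humans, -L=never strips color codes, and `--level warn,err,crit,alert,emerg` drops informational noise before `tail` keeps the newest 400 lines. Wrapped as a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kernelWarnings mirrors the exact pipeline minikube runs above.
    func kernelWarnings() (string, error) {
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    	out, err := cmd.CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := kernelWarnings()
    	if err != nil {
    		fmt.Println("dmesg failed:", err)
    		return
    	}
    	fmt.Print(out)
    }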
I0823 19:04:44.004592 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:04:44.004619 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:44.024487 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:44.024515 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:44.044095 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:44.044126 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:44.080196 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:04:44.080234 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:44.101008 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:44.101043 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:44.174655 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:04:44.174688 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:46.696346 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:46.696930 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:04:46.696983 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:46.697022 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:46.715814 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:46.715837 46108 cri.go:89] found id: ""
I0823 19:04:46.715847 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:04:46.715903 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:46.720540 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:46.720607 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:46.738601 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:46.738625 46108 cri.go:89] found id: ""
I0823 19:04:46.738634 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:04:46.738690 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:46.742455 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:46.742518 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:46.759354 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:46.759379 46108 cri.go:89] found id: ""
I0823 19:04:46.759388 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:46.759439 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:46.763540 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:46.763603 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:46.780565 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:46.780588 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:46.780595 46108 cri.go:89] found id: ""
I0823 19:04:46.780602 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:46.780655 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:46.784494 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:46.789519 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:46.789601 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:46.804832 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:46.804851 46108 cri.go:89] found id: ""
I0823 19:04:46.804860 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:46.804919 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:46.808776 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:46.808833 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:46.825754 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:46.825776 46108 cri.go:89] found id: ""
I0823 19:04:46.825784 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:04:46.825838 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:46.829497 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:46.829559 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:46.846726 46108 cri.go:89] found id: ""
I0823 19:04:46.846750 46108 logs.go:284] 0 containers: []
W0823 19:04:46.846759 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:46.846767 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:46.846823 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:46.863686 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:46.863710 46108 cri.go:89] found id: ""
I0823 19:04:46.863718 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:46.863772 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:46.867477 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:04:46.867497 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:46.888008 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:46.888037 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:46.912444 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:46.912471 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:46.949715 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:46.949745 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:46.980070 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:46.980103 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:47.049168 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:47.049210 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:47.074971 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:47.075010 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:47.147532 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:47.147564 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:47.159764 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:47.159813 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:47.247590 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:04:47.247621 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:04:47.247635 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:47.264857 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:47.264885 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:47.307165 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:47.307201 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:47.333410 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:04:47.333453 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:49.874688 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:49.875267 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:04:49.875313 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:49.875361 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:49.893504 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:49.893527 46108 cri.go:89] found id: ""
I0823 19:04:49.893536 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:04:49.893609 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:49.897743 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:49.897811 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:49.916405 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:49.916427 46108 cri.go:89] found id: ""
I0823 19:04:49.916437 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:04:49.916499 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:49.921706 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:49.921774 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:49.940758 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:49.940780 46108 cri.go:89] found id: ""
I0823 19:04:49.940789 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:49.940842 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:49.944971 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:49.945041 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:49.963866 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:49.963887 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:49.963891 46108 cri.go:89] found id: ""
I0823 19:04:49.963897 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:49.963939 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:49.968271 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:49.972063 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:49.972131 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:49.989051 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:49.989096 46108 cri.go:89] found id: ""
I0823 19:04:49.989106 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:49.989166 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:49.992874 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:49.992936 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:50.008836 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:50.008862 46108 cri.go:89] found id: ""
I0823 19:04:50.008871 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:04:50.008934 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:50.013122 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:50.013198 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:50.028587 46108 cri.go:89] found id: ""
I0823 19:04:50.028610 46108 logs.go:284] 0 containers: []
W0823 19:04:50.028620 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:50.028628 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:50.028690 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:50.045391 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:50.045418 46108 cri.go:89] found id: ""
I0823 19:04:50.045427 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:50.045479 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:50.050677 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:50.050701 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:50.092067 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:04:50.092101 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:50.115413 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:04:50.115450 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:50.133086 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:50.133116 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:50.221813 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:04:50.221842 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:50.221856 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:50.250981 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:50.251009 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:50.273652 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:50.273683 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:50.301973 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:04:50.302008 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:50.336341 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:50.336377 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:50.354493 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:50.354525 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:50.418714 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:50.418756 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:50.430688 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:50.430716 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:50.496924 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:50.496973 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:53.018726 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:53.019380 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:04:53.019429 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:53.019471 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:53.037622 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:53.037642 46108 cri.go:89] found id: ""
I0823 19:04:53.037649 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:04:53.037706 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:53.041854 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:53.041923 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:53.062451 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:53.062473 46108 cri.go:89] found id: ""
I0823 19:04:53.062481 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:04:53.062536 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:53.067317 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:53.067388 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:53.086936 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:53.086976 46108 cri.go:89] found id: ""
I0823 19:04:53.086985 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:53.087049 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:53.091960 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:53.092032 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:53.111873 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:53.111897 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:53.111904 46108 cri.go:89] found id: ""
I0823 19:04:53.111912 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:53.111972 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:53.116680 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:53.121269 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:53.121323 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:53.143085 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:53.143106 46108 cri.go:89] found id: ""
I0823 19:04:53.143117 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:53.143177 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:53.148747 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:53.148816 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:53.169554 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:53.169575 46108 cri.go:89] found id: ""
I0823 19:04:53.169582 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:04:53.169636 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:53.173508 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:53.173586 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:53.192842 46108 cri.go:89] found id: ""
I0823 19:04:53.192867 46108 logs.go:284] 0 containers: []
W0823 19:04:53.192876 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:53.192883 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:53.192941 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:53.212551 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:53.212576 46108 cri.go:89] found id: ""
I0823 19:04:53.212585 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:53.212640 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:53.216429 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:53.216455 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:53.246843 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:53.246870 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:53.259496 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:04:53.259591 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:53.281158 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:04:53.281195 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:53.323763 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:53.323802 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:53.347834 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:53.347869 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:53.383646 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:53.383680 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:53.406649 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:53.406686 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:53.436771 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:53.436806 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:53.454754 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:53.454791 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:53.517906 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:53.517937 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:53.594842 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:04:53.594874 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:04:53.594890 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:53.612568 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:53.612601 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:56.184122 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:56.184837 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:04:56.184903 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:56.184964 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:56.204535 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:56.204553 46108 cri.go:89] found id: ""
I0823 19:04:56.204561 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:04:56.204615 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:56.209206 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:56.209268 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:56.225202 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:56.225228 46108 cri.go:89] found id: ""
I0823 19:04:56.225237 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:04:56.225295 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:56.229865 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:56.229925 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:56.245380 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:56.245403 46108 cri.go:89] found id: ""
I0823 19:04:56.245411 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:56.245463 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:56.249348 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:56.249407 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:56.265234 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:56.265259 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:56.265266 46108 cri.go:89] found id: ""
I0823 19:04:56.265274 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:56.265328 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:56.269742 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:56.274208 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:56.274267 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:56.291420 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:56.291442 46108 cri.go:89] found id: ""
I0823 19:04:56.291451 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:56.291504 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:56.295425 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:56.295491 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:56.314242 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:56.314264 46108 cri.go:89] found id: ""
I0823 19:04:56.314272 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:04:56.314333 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:56.318433 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:56.318502 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:56.337506 46108 cri.go:89] found id: ""
I0823 19:04:56.337527 46108 logs.go:284] 0 containers: []
W0823 19:04:56.337535 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:56.337558 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:56.337618 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:56.356339 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:56.356364 46108 cri.go:89] found id: ""
I0823 19:04:56.356374 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:56.356421 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:56.360620 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:04:56.360649 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:56.393943 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:04:56.393980 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:56.442409 46108 logs.go:123] Gathering logs for container status ...
I0823 19:04:56.442449 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:04:56.482753 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:56.482784 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:56.558447 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:56.558483 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:56.572072 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:04:56.572113 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:56.594869 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:04:56.594894 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:56.616072 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:56.616109 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:56.634784 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:56.634810 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:56.736082 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:56.736114 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:04:56.820648 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:04:56.820673 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:56.820687 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:56.867045 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:56.867088 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:56.892957 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:56.893002 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:59.423015 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:04:59.423805 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:04:59.423860 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:04:59.423919 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:04:59.444448 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:59.444466 46108 cri.go:89] found id: ""
I0823 19:04:59.444472 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:04:59.444515 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:59.448579 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:04:59.448639 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:04:59.465677 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:59.465696 46108 cri.go:89] found id: ""
I0823 19:04:59.465705 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:04:59.465761 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:59.471324 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:04:59.471405 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:04:59.490341 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:04:59.490358 46108 cri.go:89] found id: ""
I0823 19:04:59.490365 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:04:59.490419 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:59.495979 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:04:59.496053 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:04:59.514142 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:59.514166 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:59.514173 46108 cri.go:89] found id: ""
I0823 19:04:59.514181 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:04:59.514243 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:59.518120 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:59.521741 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:04:59.521792 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:04:59.537474 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:59.537497 46108 cri.go:89] found id: ""
I0823 19:04:59.537506 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:04:59.537574 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:59.541355 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:04:59.541417 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:04:59.557486 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:59.557507 46108 cri.go:89] found id: ""
I0823 19:04:59.557516 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:04:59.557581 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:59.562325 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:04:59.562387 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:04:59.579288 46108 cri.go:89] found id: ""
I0823 19:04:59.579325 46108 logs.go:284] 0 containers: []
W0823 19:04:59.579334 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:04:59.579342 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:04:59.579397 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:04:59.598389 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:59.598416 46108 cri.go:89] found id: ""
I0823 19:04:59.598426 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:04:59.598484 46108 ssh_runner.go:195] Run: which crictl
I0823 19:04:59.606603 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:04:59.606634 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:04:59.630620 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:04:59.630648 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:04:59.649254 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:04:59.649292 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:04:59.682830 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:04:59.682870 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:04:59.726266 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:04:59.726301 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:04:59.776539 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:04:59.776585 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:04:59.804619 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:04:59.804660 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:04:59.826112 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:04:59.826148 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:04:59.918310 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:04:59.918345 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:04:59.986811 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:04:59.986845 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:04:59.999558 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:04:59.999584 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:00.085399 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:00.085425 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:00.085440 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:00.105800 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:00.105833 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
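From 19:04:43 onward this section is one diagnostic cycle unrolled roughly every three seconds: probe healthz, rediscover containers, re-gather every log source, repeat. The outer loop, reduced to a compilable stub (helper names are hypothetical and the timeout budget is an assumption, not a value from the log):

    package main

    import (
    	"fmt"
    	"time"
    )

    func apiserverHealthy() bool    { return false } // stub: GET https://<vm-ip>:8443/healthz
    func gatherDiagnostics() string { return "" }    // stub: the crictl/journalctl sweep above

    func main() {
    	deadline := time.Now().Add(10 * time.Minute) // assumed wait budget
    	for time.Now().Before(deadline) {
    		if apiserverHealthy() {
    			fmt.Println("apiserver is up")
    			return
    		}
    		_ = gatherDiagnostics()
    		time.Sleep(3 * time.Second) // cadence visible in the log timestamps
    	}
    	fmt.Println("gave up waiting for a healthy apiserver")
    }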
I0823 19:05:02.636341 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:02.636901 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:05:02.636947 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:02.636988 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:02.653337 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:02.653363 46108 cri.go:89] found id: ""
I0823 19:05:02.653372 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:05:02.653424 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:02.657063 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:02.657118 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:02.673116 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:02.673132 46108 cri.go:89] found id: ""
I0823 19:05:02.673138 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:05:02.673187 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:02.676702 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:02.676757 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:02.693617 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:02.693632 46108 cri.go:89] found id: ""
I0823 19:05:02.693639 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:02.693686 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:02.697578 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:02.697630 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:02.713130 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:02.713146 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:02.713150 46108 cri.go:89] found id: ""
I0823 19:05:02.713158 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:02.713211 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:02.716808 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:02.720760 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:02.720830 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:02.738432 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:02.738457 46108 cri.go:89] found id: ""
I0823 19:05:02.738467 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:02.738526 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:02.742127 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:02.742172 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:02.758113 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:02.758137 46108 cri.go:89] found id: ""
I0823 19:05:02.758146 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:05:02.758192 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:02.762213 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:02.762266 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:02.781167 46108 cri.go:89] found id: ""
I0823 19:05:02.781191 46108 logs.go:284] 0 containers: []
W0823 19:05:02.781201 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:02.781209 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:02.781269 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:02.799120 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:02.799147 46108 cri.go:89] found id: ""
I0823 19:05:02.799155 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:02.799204 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:02.803080 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:02.803097 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:02.819859 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:02.819882 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:02.880972 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:05:02.881002 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:02.902597 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:02.902625 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:02.923740 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:02.923775 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:02.962880 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:02.962912 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:02.988408 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:02.988436 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:03.026355 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:05:03.026388 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:03.058475 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:03.058507 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:03.079946 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:03.079975 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:03.153560 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:03.153611 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:03.165939 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:03.165971 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:03.244257 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:03.244285 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:05:03.244298 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:05.761659 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:05.762316 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:05:05.762370 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:05.762419 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:05.779566 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:05.779591 46108 cri.go:89] found id: ""
I0823 19:05:05.779600 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:05:05.779656 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:05.784035 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:05.784095 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:05.800022 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:05.800051 46108 cri.go:89] found id: ""
I0823 19:05:05.800060 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:05:05.800105 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:05.803608 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:05.803656 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:05.819262 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:05.819279 46108 cri.go:89] found id: ""
I0823 19:05:05.819285 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:05.819329 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:05.823503 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:05.823567 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:05.841133 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:05.841149 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:05.841153 46108 cri.go:89] found id: ""
I0823 19:05:05.841159 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:05.841209 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:05.845110 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:05.848624 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:05.848669 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:05.865134 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:05.865159 46108 cri.go:89] found id: ""
I0823 19:05:05.865167 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:05.865209 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:05.869288 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:05.869355 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:05.885859 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:05.885892 46108 cri.go:89] found id: ""
I0823 19:05:05.885901 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:05:05.885961 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:05.889755 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:05.889817 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:05.906717 46108 cri.go:89] found id: ""
I0823 19:05:05.906757 46108 logs.go:284] 0 containers: []
W0823 19:05:05.906768 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:05.906775 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:05.906832 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:05.921435 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:05.921460 46108 cri.go:89] found id: ""
I0823 19:05:05.921468 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:05.921524 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:05.925488 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:05.925512 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:05.935886 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:05:05.935911 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:05.955300 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:05.955338 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:05.984584 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:05.984612 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:06.006430 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:06.006457 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:06.071360 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:06.071397 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:06.098227 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:06.098251 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:06.164341 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:06.164381 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:06.250145 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:06.250170 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:05:06.250185 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:06.268486 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:06.268517 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:06.308798 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:06.308831 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:06.338182 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:05:06.338213 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:06.368647 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:06.368679 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:08.885982 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:08.886569 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:05:08.886613 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:08.886657 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:08.903825 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:08.903851 46108 cri.go:89] found id: ""
I0823 19:05:08.903861 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:05:08.903920 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:08.908376 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:08.908439 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:08.925898 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:08.925924 46108 cri.go:89] found id: ""
I0823 19:05:08.925930 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:05:08.925988 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:08.930245 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:08.930315 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:08.947198 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:08.947223 46108 cri.go:89] found id: ""
I0823 19:05:08.947231 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:08.947290 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:08.951593 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:08.951657 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:08.972355 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:08.972382 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:08.972389 46108 cri.go:89] found id: ""
I0823 19:05:08.972398 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:08.972460 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:08.977006 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:08.981381 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:08.981450 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:08.997591 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:08.997617 46108 cri.go:89] found id: ""
I0823 19:05:08.997626 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:08.997681 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:09.001971 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:09.002020 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:09.019841 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:09.019864 46108 cri.go:89] found id: ""
I0823 19:05:09.019873 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:05:09.019931 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:09.024703 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:09.024770 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:09.041032 46108 cri.go:89] found id: ""
I0823 19:05:09.041059 46108 logs.go:284] 0 containers: []
W0823 19:05:09.041069 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:09.041077 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:09.041134 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:09.061258 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:09.061283 46108 cri.go:89] found id: ""
I0823 19:05:09.061292 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:09.061347 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:09.065515 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:09.065556 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:09.132588 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:09.132632 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:09.143795 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:09.143825 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:09.227916 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:09.227941 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:05:09.227954 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:09.245188 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:09.245216 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:09.264861 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:09.264889 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:09.302495 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:09.302530 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:09.323552 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:05:09.323582 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:09.356325 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:09.356361 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:09.373837 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:09.373863 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:09.440687 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:09.440724 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:09.467916 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:05:09.467946 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:09.491139 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:09.491169 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:12.024943 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:12.025611 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:05:12.025661 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:12.025709 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:12.043465 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:12.043484 46108 cri.go:89] found id: ""
I0823 19:05:12.043490 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:05:12.043530 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:12.047731 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:12.047801 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:12.063462 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:12.063485 46108 cri.go:89] found id: ""
I0823 19:05:12.063493 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:05:12.063535 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:12.067085 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:12.067139 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:12.085253 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:12.085274 46108 cri.go:89] found id: ""
I0823 19:05:12.085281 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:12.085333 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:12.089135 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:12.089194 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:12.105633 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:12.105653 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:12.105661 46108 cri.go:89] found id: ""
I0823 19:05:12.105669 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:12.105739 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:12.109681 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:12.113420 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:12.113480 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:12.128387 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:12.128405 46108 cri.go:89] found id: ""
I0823 19:05:12.128413 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:12.128469 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:12.132576 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:12.132637 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:12.150115 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:12.150134 46108 cri.go:89] found id: ""
I0823 19:05:12.150141 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:05:12.150179 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:12.154174 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:12.154236 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:12.170640 46108 cri.go:89] found id: ""
I0823 19:05:12.170660 46108 logs.go:284] 0 containers: []
W0823 19:05:12.170666 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:12.170671 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:12.170725 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:12.188064 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:12.188086 46108 cri.go:89] found id: ""
I0823 19:05:12.188098 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:12.188156 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:12.192292 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:12.192310 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:12.213933 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:12.213960 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:12.253648 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:05:12.253679 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:12.291294 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:12.291329 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:12.355231 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:12.355266 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:12.383271 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:12.383298 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:12.475229 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:12.475255 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:12.475269 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:12.487782 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:05:12.487816 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:12.507091 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:05:12.507131 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:12.527032 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:12.527058 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:12.552328 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:12.552373 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:12.587768 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:12.587798 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:12.606889 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:12.606922 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:15.182524 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:15.183169 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:05:15.183216 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:15.183261 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:15.207047 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:15.207068 46108 cri.go:89] found id: ""
I0823 19:05:15.207077 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:05:15.207131 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:15.213209 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:15.213267 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:15.234240 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:15.234260 46108 cri.go:89] found id: ""
I0823 19:05:15.234269 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:05:15.234318 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:15.242169 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:15.242220 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:15.271466 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:15.271487 46108 cri.go:89] found id: ""
I0823 19:05:15.271493 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:15.271534 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:15.276970 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:15.277041 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:15.300819 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:15.300843 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:15.300849 46108 cri.go:89] found id: ""
I0823 19:05:15.300857 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:15.300916 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:15.306646 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:15.311576 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:15.311645 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:15.331413 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:15.331440 46108 cri.go:89] found id: ""
I0823 19:05:15.331450 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:15.331506 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:15.336009 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:15.336080 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:15.359492 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:15.359517 46108 cri.go:89] found id: ""
I0823 19:05:15.359525 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:05:15.359582 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:15.363943 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:15.364004 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:15.382026 46108 cri.go:89] found id: ""
I0823 19:05:15.382059 46108 logs.go:284] 0 containers: []
W0823 19:05:15.382068 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:15.382076 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:15.382144 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:15.404262 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:15.404285 46108 cri.go:89] found id: ""
I0823 19:05:15.404293 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:15.404355 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:15.408577 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:15.408605 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:15.439760 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:15.439793 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:15.459344 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:15.459375 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:15.495240 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:15.495277 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:15.521891 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:15.521931 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:15.564880 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:15.564920 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:15.617801 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:05:15.617849 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:15.639318 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:15.639352 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:15.681687 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:05:15.681718 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:15.726114 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:15.726160 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:15.806872 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:15.806907 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:15.882726 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:15.882761 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:15.982318 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:15.982344 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:05:15.982354 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:18.507719 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:18.508353 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:05:18.508410 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:18.508466 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:18.526666 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:18.526688 46108 cri.go:89] found id: ""
I0823 19:05:18.526696 46108 logs.go:284] 1 containers: [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:05:18.526746 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:18.531373 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:18.531429 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:18.550481 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:18.550510 46108 cri.go:89] found id: ""
I0823 19:05:18.550522 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:05:18.550575 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:18.556364 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:18.556426 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:18.575797 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:18.575815 46108 cri.go:89] found id: ""
I0823 19:05:18.575822 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:18.575862 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:18.579786 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:18.579859 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:18.599732 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:18.599755 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:18.599759 46108 cri.go:89] found id: ""
I0823 19:05:18.599765 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:18.599808 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:18.604070 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:18.608517 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:18.608591 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:18.631581 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:18.631616 46108 cri.go:89] found id: ""
I0823 19:05:18.631624 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:18.631684 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:18.636076 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:18.636142 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:18.651058 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:18.651076 46108 cri.go:89] found id: ""
I0823 19:05:18.651084 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:05:18.651138 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:18.654657 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:18.654705 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:18.671712 46108 cri.go:89] found id: ""
I0823 19:05:18.671740 46108 logs.go:284] 0 containers: []
W0823 19:05:18.671751 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:18.671759 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:18.671812 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:18.692729 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:18.692753 46108 cri.go:89] found id: ""
I0823 19:05:18.692762 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:18.692811 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:18.697295 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:05:18.697314 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:18.719318 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:18.719346 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:18.739487 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:18.739514 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:18.761602 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:05:18.761635 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:18.798623 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:18.798654 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:18.870646 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:18.870689 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:18.895869 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:18.895902 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:18.976150 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:18.976189 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:18.987961 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:18.987989 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:19.081616 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:19.081644 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:05:19.081654 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:19.100113 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:19.100151 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:19.142367 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:19.142405 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:19.186469 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:19.186509 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:21.709446 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:26.710599 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0823 19:05:26.710673 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:26.710732 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:26.729493 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:26.729517 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:26.729523 46108 cri.go:89] found id: ""
I0823 19:05:26.729531 46108 logs.go:284] 2 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:05:26.729593 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:26.734154 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:26.738569 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:26.738622 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:26.756621 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:26.756640 46108 cri.go:89] found id: ""
I0823 19:05:26.756649 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:05:26.756704 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:26.761233 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:26.761289 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:26.781902 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:26.781929 46108 cri.go:89] found id: ""
I0823 19:05:26.781939 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:26.781997 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:26.790699 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:26.790749 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:26.813784 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:26.813811 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:26.813818 46108 cri.go:89] found id: ""
I0823 19:05:26.813827 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:26.813877 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:26.818490 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:26.823145 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:26.823202 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:26.845567 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:26.845592 46108 cri.go:89] found id: ""
I0823 19:05:26.845601 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:26.845655 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:26.850360 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:26.850426 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:26.870395 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:26.870419 46108 cri.go:89] found id: ""
I0823 19:05:26.870428 46108 logs.go:284] 1 containers: [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:05:26.870475 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:26.876101 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:26.876167 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:26.899479 46108 cri.go:89] found id: ""
I0823 19:05:26.899504 46108 logs.go:284] 0 containers: []
W0823 19:05:26.899515 46108 logs.go:286] No container was found matching "kindnet"
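The "kindnet" warning above is expected noise rather than a failure: no kindnet container exists in this cluster, so the gatherer records zero matches and moves on. The same zero-result probe can be reproduced by hand (command copied verbatim from the log; on this cluster it should print nothing):

    sudo crictl ps -a --quiet --name=kindnet
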
I0823 19:05:26.899523 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:26.899589 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:26.927928 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:26.927952 46108 cri.go:89] found id: ""
I0823 19:05:26.927970 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:26.928027 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:26.933943 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:26.933972 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:27.021878 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:27.021911 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0823 19:05:37.143963 46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.122030093s)
W0823 19:05:37.144006 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
output:
** stderr **
Unable to connect to the server: net/http: TLS handshake timeout
** /stderr **
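The "TLS handshake timeout" above means the apiserver still accepted the TCP connection but never completed the handshake, which points at a wedged apiserver rather than a stopped one; later retries in this log fail outright with "connection refused". To rerun the same probe by hand with an explicit client-side deadline, the identical kubectl invocation can be given --request-timeout, a standard kubectl flag (the binary and kubeconfig paths below are copied from the log; the timeout value is an illustrative choice):

    sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig --request-timeout=15s
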
I0823 19:05:37.144019 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:37.144031 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:37.169949 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:37.169989 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:37.206167 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:37.206202 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:37.225475 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:37.225503 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:37.239639 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:05:37.239673 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:37.261955 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:05:37.261991 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:37.278927 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:37.278955 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:37.309040 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:37.309068 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:37.334854 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:05:37.334892 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:37.362211 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:05:37.362245 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:37.395147 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:37.395178 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:37.461867 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:37.461902 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
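Each cycle in this log follows the same shape: probe /healthz, enumerate the control-plane containers through crictl, then tail the last 400 lines of each one. A minimal shell sketch of that enumeration, built only from the two crictl invocations that appear verbatim above (it assumes crictl is on PATH and sudo access to the containerd CRI socket, as on this node):

    #!/usr/bin/env bash
    # Mirror the per-component log gathering seen in this cycle.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      if [ -z "${ids}" ]; then
        # Matches the gatherer's "No container was found matching" case.
        echo "no container matching \"${name}\"" >&2
        continue
      fi
      for id in ${ids}; do
        echo "=== ${name} [${id}] ==="
        sudo crictl logs --tail 400 "${id}"
      done
    done
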
I0823 19:05:40.005020 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:41.846446 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": read tcp 192.168.61.1:56544->192.168.61.47:8443: read: connection reset by peer
I0823 19:05:41.846514 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:41.846577 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:41.866322 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:41.866353 46108 cri.go:89] found id: "06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:41.866360 46108 cri.go:89] found id: ""
I0823 19:05:41.866369 46108 logs.go:284] 2 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc]
I0823 19:05:41.866451 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:41.870940 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:41.875236 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:41.875303 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:41.894877 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:41.894903 46108 cri.go:89] found id: ""
I0823 19:05:41.894911 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:05:41.894962 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:41.903269 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:41.903332 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:41.927076 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:41.927096 46108 cri.go:89] found id: ""
I0823 19:05:41.927103 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:41.927146 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:41.933333 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:41.933406 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:41.951576 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:41.951601 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:41.951607 46108 cri.go:89] found id: ""
I0823 19:05:41.951615 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:41.951674 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:41.958235 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:41.963263 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:41.963326 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:41.981994 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:41.982019 46108 cri.go:89] found id: ""
I0823 19:05:41.982026 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:41.982081 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:41.986871 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:41.986931 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:42.004018 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:42.004036 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:42.004040 46108 cri.go:89] found id: ""
I0823 19:05:42.004045 46108 logs.go:284] 2 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:05:42.004110 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:42.008132 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:42.011951 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:42.011996 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:42.031705 46108 cri.go:89] found id: ""
I0823 19:05:42.031725 46108 logs.go:284] 0 containers: []
W0823 19:05:42.031735 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:42.031743 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:42.031805 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:42.050488 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:42.050510 46108 cri.go:89] found id: ""
I0823 19:05:42.050519 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:42.050573 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:42.054572 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:42.054592 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:42.065667 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:42.065697 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:42.145190 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:42.145220 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:05:42.145234 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:42.165642 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:42.165670 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:42.189613 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:42.189645 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:42.211684 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:42.211711 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:42.271145 46108 logs.go:123] Gathering logs for kube-apiserver [06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc] ...
I0823 19:05:42.271182 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 06d3eb9c8947fd622a50f29bafedaad5f1d8a7f3edd388faa497da4db4215edc"
I0823 19:05:42.290971 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:05:42.290999 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:42.310571 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:42.310597 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:42.328444 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:42.328475 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:42.362660 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:05:42.362692 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:42.392622 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:42.392649 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:42.455590 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:42.455621 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:42.481796 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:05:42.481826 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:42.498843 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:42.498871 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:45.032605 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:45.033208 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
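Note how the probes degrade across this log: a hung TLS handshake at 19:05:37, "connection reset by peer" at 19:05:41, and from here on an immediate "connection refused", i.e. nothing is listening on 8443 at all anymore. An equivalent manual probe (curl is my substitution, not part of the log; -k skips verification of the minikube-signed certificate):

    curl -sk --max-time 5 https://192.168.61.47:8443/healthz \
      || echo "apiserver is not listening"
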
I0823 19:05:45.033258 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:45.033302 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:45.050497 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:45.050522 46108 cri.go:89] found id: ""
I0823 19:05:45.050531 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:05:45.050593 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:45.055383 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:45.055444 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:45.077342 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:45.077366 46108 cri.go:89] found id: ""
I0823 19:05:45.077373 46108 logs.go:284] 1 containers: [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:05:45.077426 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:45.082934 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:45.083006 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:45.102586 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:45.102610 46108 cri.go:89] found id: ""
I0823 19:05:45.102619 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:45.102677 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:45.106802 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:45.106882 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:45.125732 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:45.125759 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:45.125765 46108 cri.go:89] found id: ""
I0823 19:05:45.125774 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:45.125831 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:45.130533 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:45.136227 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:45.136289 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:45.155736 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:45.155762 46108 cri.go:89] found id: ""
I0823 19:05:45.155769 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:45.155822 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:45.160563 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:45.160635 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:45.177406 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:45.177433 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:45.177440 46108 cri.go:89] found id: ""
I0823 19:05:45.177448 46108 logs.go:284] 2 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:05:45.177506 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:45.182054 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:45.186013 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:45.186084 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:45.202267 46108 cri.go:89] found id: ""
I0823 19:05:45.202294 46108 logs.go:284] 0 containers: []
W0823 19:05:45.202308 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:45.202316 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:45.202378 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:45.223935 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:45.223954 46108 cri.go:89] found id: ""
I0823 19:05:45.223960 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:45.224013 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:45.232380 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:45.232413 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:45.284220 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:45.284256 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:45.298376 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:45.298404 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:45.328296 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:45.328344 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:45.354436 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:45.354471 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:45.388543 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:45.388578 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:45.407329 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:45.407364 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:45.504343 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:05:45.504374 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:45.547849 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:05:45.547884 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:45.570491 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:05:45.570519 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:45.605456 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:45.605487 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:45.633417 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:45.633445 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:45.706675 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:45.706713 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:45.797573 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:45.797598 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:05:45.797609 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:48.321562 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:48.322150 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:05:48.322203 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:48.322261 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:48.339493 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:48.339517 46108 cri.go:89] found id: ""
I0823 19:05:48.339527 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:05:48.339585 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:48.343895 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:48.343962 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:48.373419 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:05:48.373445 46108 cri.go:89] found id: "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
I0823 19:05:48.373452 46108 cri.go:89] found id: ""
I0823 19:05:48.373462 46108 logs.go:284] 2 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]
I0823 19:05:48.373521 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:48.377952 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:48.383096 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:48.383167 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:48.398715 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:48.398736 46108 cri.go:89] found id: ""
I0823 19:05:48.398744 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:48.398813 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:48.402949 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:48.403013 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:48.426893 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:48.426917 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:48.426923 46108 cri.go:89] found id: ""
I0823 19:05:48.426932 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:48.426991 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:48.431665 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:48.435748 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:48.435810 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:48.452955 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:48.452974 46108 cri.go:89] found id: ""
I0823 19:05:48.452981 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:48.453020 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:48.457345 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:48.457412 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:48.477455 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:48.477476 46108 cri.go:89] found id: "5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:48.477482 46108 cri.go:89] found id: ""
I0823 19:05:48.477491 46108 logs.go:284] 2 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636]
I0823 19:05:48.477559 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:48.482041 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:48.486974 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:48.487028 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:48.507380 46108 cri.go:89] found id: ""
I0823 19:05:48.507406 46108 logs.go:284] 0 containers: []
W0823 19:05:48.507417 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:48.507425 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:48.507496 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:48.525464 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:48.525490 46108 cri.go:89] found id: ""
I0823 19:05:48.525500 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:48.525577 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:48.529762 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:48.529790 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:48.621352 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:48.621384 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:05:48.621399 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:48.656553 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:05:48.656584 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:05:48.674634 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:05:48.674665 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:48.691778 46108 logs.go:123] Gathering logs for kube-controller-manager [5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636] ...
I0823 19:05:48.691812 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 5cc9d21fe9021ae7293b1b2a8bf867cacbfa273fa9b2120cfad8fc18a8d52636"
I0823 19:05:48.728246 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:48.728279 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:48.752383 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:48.752413 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:48.775863 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:48.775896 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:48.807936 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:48.807976 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:48.869466 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:48.869500 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:48.890400 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:48.890430 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:48.952391 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:48.952428 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:48.963271 46108 logs.go:123] Gathering logs for etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d] ...
I0823 19:05:48.963290 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d"
W0823 19:05:48.980707 46108 logs.go:130] failed etcd [ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d]: command: /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d" /bin/bash -c "sudo /bin/crictl logs --tail 400 ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d": Process exited with status 1
stdout:
stderr:
E0823 19:05:48.976612 8890 remote_runtime.go:329] ContainerStatus "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d": not found
time="2023-08-23T19:05:48Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d\": not found"
output:
** stderr **
E0823 19:05:48.976612 8890 remote_runtime.go:329] ContainerStatus "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d" from runtime service failed: rpc error: code = NotFound desc = an error occurred when try to find container "ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d": not found
time="2023-08-23T19:05:48Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"ce6911affd915943b79016bd13bc0eef3ebbb001f5e68cf81f7ee16d93d8872d\": not found"
** /stderr **
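A plausible reading of the NotFound above (my interpretation, not something the log states): the 19:05:48 etcd listing returned two ids, the old ce6911... alongside a fresh c46f7a..., and the runtime removed the old container in the roughly half second between the crictl ps listing and the crictl logs call, so the gatherer raced an etcd restart. Re-resolving the id immediately before reading narrows that window (sketch only, not minikube's code):

    # Re-query just before tailing to avoid acting on a stale id.
    id=$(sudo crictl ps -a --quiet --name=etcd | head -n1)
    [ -n "${id}" ] && sudo crictl logs --tail 400 "${id}"
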
I0823 19:05:48.980741 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:48.980754 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:49.017331 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:49.017367 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:51.536443 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:51.537122 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:05:51.537181 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:51.537238 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:51.555402 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:51.555434 46108 cri.go:89] found id: ""
I0823 19:05:51.555441 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:05:51.555494 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:51.559708 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:51.559780 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:51.582970 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:05:51.582994 46108 cri.go:89] found id: ""
I0823 19:05:51.583002 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:05:51.583060 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:51.587385 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:51.587451 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:51.606721 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:51.606749 46108 cri.go:89] found id: ""
I0823 19:05:51.606758 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:51.606817 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:51.611199 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:51.611279 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:51.629690 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:51.629711 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:51.629715 46108 cri.go:89] found id: ""
I0823 19:05:51.629721 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:51.629781 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:51.635016 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:51.639061 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:51.639127 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:51.656536 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:51.656561 46108 cri.go:89] found id: ""
I0823 19:05:51.656569 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:51.656622 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:51.660991 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:51.661060 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:51.677675 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:51.677699 46108 cri.go:89] found id: ""
I0823 19:05:51.677707 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:05:51.677763 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:51.682316 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:51.682381 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:51.698101 46108 cri.go:89] found id: ""
I0823 19:05:51.698128 46108 logs.go:284] 0 containers: []
W0823 19:05:51.698138 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:51.698145 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:51.698198 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:51.717967 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:51.717992 46108 cri.go:89] found id: ""
I0823 19:05:51.718000 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:51.718059 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:51.724446 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:51.724469 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:51.736002 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:51.736028 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:51.822206 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:51.822233 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:51.822252 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:51.851889 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:51.851921 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:51.870203 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:51.870227 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:51.937335 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:05:51.937365 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:51.975724 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:51.975760 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:52.003943 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:52.003970 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:52.062343 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:05:52.062376 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:52.085086 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:05:52.085115 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:05:52.101871 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:52.101898 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:52.142589 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:52.142615 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:52.167852 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:52.167888 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:54.703424 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:54.704054 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:05:54.704118 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:54.704180 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:54.728659 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:54.728684 46108 cri.go:89] found id: ""
I0823 19:05:54.728693 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:05:54.728797 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:54.735292 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:54.735361 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:54.754779 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:05:54.754806 46108 cri.go:89] found id: ""
I0823 19:05:54.754816 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:05:54.754878 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:54.759465 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:54.759520 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:54.788532 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:54.788556 46108 cri.go:89] found id: ""
I0823 19:05:54.788566 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:54.788621 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:54.794260 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:54.794329 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:54.820790 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:54.820819 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:54.820831 46108 cri.go:89] found id: ""
I0823 19:05:54.820840 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:54.820895 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:54.827024 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:54.833001 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:54.833093 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:54.856210 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:54.856234 46108 cri.go:89] found id: ""
I0823 19:05:54.856243 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:54.856298 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:54.861399 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:54.861456 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:54.883432 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:54.883459 46108 cri.go:89] found id: ""
I0823 19:05:54.883468 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:05:54.883529 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:54.889339 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:54.889425 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:54.909342 46108 cri.go:89] found id: ""
I0823 19:05:54.909374 46108 logs.go:284] 0 containers: []
W0823 19:05:54.909385 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:54.909392 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:54.909454 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:54.934585 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:54.934608 46108 cri.go:89] found id: ""
I0823 19:05:54.934616 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:54.934686 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:54.939394 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:54.939421 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:55.010420 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:55.010452 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:55.024763 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:55.024800 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:55.051548 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:55.051577 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:55.089386 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:55.089425 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:55.182639 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:55.182676 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:55.215920 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:55.215970 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:55.243850 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:55.243890 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:55.365367 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:55.365394 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:05:55.365409 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:55.391596 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:05:55.391634 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:05:55.413706 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:55.413737 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:55.461567 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:55.461599 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:55.505222 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:05:55.505254 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:58.049534 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:05:58.050212 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:05:58.050263 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:05:58.050318 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:05:58.069075 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:58.069103 46108 cri.go:89] found id: ""
I0823 19:05:58.069112 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:05:58.069172 46108 ssh_runner.go:195] Run: which crictl
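Before it can pull component logs, the collector asks the runtime for matching container IDs. `sudo crictl ps -a --quiet --name=<component>` prints one 64-hex ID per line, and the empty trailing line is what likely produces the found id: "" entries before each count. A sketch of that listing step, assuming a plain local shell in place of minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the pattern in the log: run
// `sudo crictl ps -a --quiet --name=<name>` and split the output
// into IDs, dropping the empty trailing line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}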
I0823 19:05:58.073772 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:05:58.073840 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:05:58.090025 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:05:58.090050 46108 cri.go:89] found id: ""
I0823 19:05:58.090058 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:05:58.090113 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:58.094442 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:05:58.094511 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:05:58.119228 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:58.119250 46108 cri.go:89] found id: ""
I0823 19:05:58.119258 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:05:58.119310 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:58.125716 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:05:58.125789 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:05:58.146238 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:58.146276 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:58.146289 46108 cri.go:89] found id: ""
I0823 19:05:58.146297 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:05:58.146353 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:58.152091 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:58.157411 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:05:58.157483 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:05:58.174659 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:05:58.174692 46108 cri.go:89] found id: ""
I0823 19:05:58.174702 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:05:58.174760 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:58.179755 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:05:58.179830 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:05:58.205206 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:58.205227 46108 cri.go:89] found id: ""
I0823 19:05:58.205234 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:05:58.205285 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:58.211124 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:05:58.211201 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:05:58.230651 46108 cri.go:89] found id: ""
I0823 19:05:58.230692 46108 logs.go:284] 0 containers: []
W0823 19:05:58.230703 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:05:58.230720 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:05:58.230786 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:05:58.255736 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:58.255764 46108 cri.go:89] found id: ""
I0823 19:05:58.255773 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:05:58.255835 46108 ssh_runner.go:195] Run: which crictl
I0823 19:05:58.260228 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:05:58.260256 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:05:58.290658 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:05:58.290703 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:05:58.313231 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:05:58.313266 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:05:58.405561 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:05:58.405600 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:05:58.494309 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:05:58.494337 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:05:58.494350 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:05:58.525794 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:05:58.525832 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:05:58.552023 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:05:58.552057 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:05:58.599056 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:05:58.599101 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:05:58.640112 46108 logs.go:123] Gathering logs for container status ...
I0823 19:05:58.640147 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:05:58.673647 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:05:58.673675 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:05:58.745546 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:05:58.745581 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:05:58.758054 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:05:58.758092 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:05:58.781280 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:05:58.781316 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
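A full gathering pass always covers the same fixed set of sources: kubelet and containerd via journalctl, dmesg, kubectl describe nodes, a container-status listing with a docker fallback, and crictl logs --tail 400 for every discovered container; components with no matching container, such as kindnet here, are skipped with a warning. The order shuffles from pass to pass, which is consistent with the sources living in a Go map, whose iteration order is randomized. A condensed sketch of such a table; the command strings are copied from the log, the map structure itself is an assumption:

package main

import "fmt"

// logSources maps a label to the shell command run for it, matching the
// commands visible in the log. Iterating a Go map yields a different
// order on every pass, matching the shuffled "Gathering logs for ..."
// sequence between retries.
func logSources(containerID string) map[string]string {
	return map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes " +
			"--kubeconfig=/var/lib/minikube/kubeconfig",
		// Falls back to docker ps if crictl is missing from the node.
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		// One entry like this per discovered container ID.
		"kube-apiserver [" + containerID + "]": "sudo /bin/crictl logs --tail 400 " + containerID,
	}
}

func main() {
	for name, cmd := range logSources("3c9c336c61dd") {
		fmt.Printf("Gathering logs for %s ...\n  %s\n", name, cmd)
	}
}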
I0823 19:06:01.330698 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:01.331395 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:01.331452 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:01.331512 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:01.352439 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:01.352456 46108 cri.go:89] found id: ""
I0823 19:06:01.352464 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:01.352505 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:01.356431 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:01.356489 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:01.372362 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:01.372381 46108 cri.go:89] found id: ""
I0823 19:06:01.372390 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:01.372449 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:01.376304 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:01.376377 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:01.393905 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:01.393933 46108 cri.go:89] found id: ""
I0823 19:06:01.393942 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:01.394001 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:01.398219 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:01.398306 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:01.417133 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:01.417154 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:01.417158 46108 cri.go:89] found id: ""
I0823 19:06:01.417165 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:01.417218 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:01.422147 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:01.426098 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:01.426165 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:01.443501 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:01.443526 46108 cri.go:89] found id: ""
I0823 19:06:01.443536 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:01.443600 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:01.447775 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:01.447845 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:01.464437 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:01.464465 46108 cri.go:89] found id: ""
I0823 19:06:01.464474 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:01.464531 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:01.468649 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:01.468732 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:01.485157 46108 cri.go:89] found id: ""
I0823 19:06:01.485183 46108 logs.go:284] 0 containers: []
W0823 19:06:01.485194 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:01.485202 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:01.485263 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:01.502362 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:01.502389 46108 cri.go:89] found id: ""
I0823 19:06:01.502411 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:01.502468 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:01.507271 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:01.507353 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:01.535669 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:01.535698 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:01.558708 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:01.558740 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:01.591352 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:01.591377 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:01.666519 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:01.666556 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:01.692114 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:01.692147 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:01.717823 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:01.717853 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:01.763825 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:01.763858 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:01.796442 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:01.796489 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:01.830332 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:01.830363 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:01.897377 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:01.897412 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:01.909533 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:01.909570 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:01.991558 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:01.991587 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:01.991606 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:04.509848 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:04.510506 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:04.510558 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:04.510621 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:04.529340 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:04.529366 46108 cri.go:89] found id: ""
I0823 19:06:04.529375 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:04.529427 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:04.535732 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:04.535803 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:04.553995 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:04.554022 46108 cri.go:89] found id: ""
I0823 19:06:04.554029 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:04.554076 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:04.557737 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:04.557817 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:04.573913 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:04.573938 46108 cri.go:89] found id: ""
I0823 19:06:04.573946 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:04.573998 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:04.577667 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:04.577724 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:04.596844 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:04.596866 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:04.596871 46108 cri.go:89] found id: ""
I0823 19:06:04.596880 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:04.596926 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:04.600759 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:04.605475 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:04.605551 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:04.625011 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:04.625035 46108 cri.go:89] found id: ""
I0823 19:06:04.625041 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:04.625083 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:04.633869 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:04.633934 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:04.654593 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:04.654616 46108 cri.go:89] found id: ""
I0823 19:06:04.654624 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:04.654682 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:04.658856 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:04.658924 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:04.678983 46108 cri.go:89] found id: ""
I0823 19:06:04.679004 46108 logs.go:284] 0 containers: []
W0823 19:06:04.679011 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:04.679017 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:04.679066 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:04.696276 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:04.696297 46108 cri.go:89] found id: ""
I0823 19:06:04.696306 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:04.696361 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:04.700244 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:04.700270 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:04.719858 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:04.719890 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:04.788211 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:04.788247 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:04.800580 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:04.800611 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:04.885821 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:04.885850 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:04.885863 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:04.908380 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:04.908407 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:04.962523 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:04.962565 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:04.998637 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:04.998684 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:05.020839 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:05.020883 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:05.094884 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:05.094918 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:05.111421 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:05.111452 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:05.131504 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:05.131542 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:05.153371 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:05.153402 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:07.692356 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:07.693035 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:07.693094 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:07.693149 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:07.711744 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:07.711761 46108 cri.go:89] found id: ""
I0823 19:06:07.711768 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:07.711818 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:07.716262 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:07.716321 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:07.741478 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:07.741503 46108 cri.go:89] found id: ""
I0823 19:06:07.741512 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:07.741575 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:07.748187 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:07.748259 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:07.769321 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:07.769348 46108 cri.go:89] found id: ""
I0823 19:06:07.769357 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:07.769402 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:07.774609 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:07.774680 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:07.795679 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:07.795706 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:07.795713 46108 cri.go:89] found id: ""
I0823 19:06:07.795721 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:07.795777 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:07.800827 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:07.805586 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:07.805649 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:07.825329 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:07.825357 46108 cri.go:89] found id: ""
I0823 19:06:07.825366 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:07.825422 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:07.829581 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:07.829642 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:07.846776 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:07.846801 46108 cri.go:89] found id: ""
I0823 19:06:07.846810 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:07.846868 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:07.851255 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:07.851315 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:07.867535 46108 cri.go:89] found id: ""
I0823 19:06:07.867560 46108 logs.go:284] 0 containers: []
W0823 19:06:07.867574 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:07.867582 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:07.867640 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:07.884538 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:07.884563 46108 cri.go:89] found id: ""
I0823 19:06:07.884573 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:07.884635 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:07.889386 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:07.889415 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:07.919315 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:07.919343 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:08.009636 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:08.009670 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:08.099086 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:08.099113 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:08.099131 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:08.117038 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:08.117071 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:08.164356 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:08.164394 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:08.208857 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:08.208902 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:08.243681 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:08.243712 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:08.262818 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:08.262852 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:08.328349 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:08.328391 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:08.340217 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:08.340243 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:08.362864 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:08.362896 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:08.386884 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:08.386910 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:10.909996 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:10.910647 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:10.910700 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:10.910764 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:10.932264 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:10.932291 46108 cri.go:89] found id: ""
I0823 19:06:10.932299 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:10.932357 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:10.937135 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:10.937207 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:10.966219 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:10.966253 46108 cri.go:89] found id: ""
I0823 19:06:10.966263 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:10.966318 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:10.971184 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:10.971266 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:10.994135 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:10.994160 46108 cri.go:89] found id: ""
I0823 19:06:10.994168 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:10.994228 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:10.999215 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:10.999284 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:11.017716 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:11.017739 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:11.017744 46108 cri.go:89] found id: ""
I0823 19:06:11.017752 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:11.017815 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:11.022288 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:11.026307 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:11.026379 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:11.047035 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:11.047061 46108 cri.go:89] found id: ""
I0823 19:06:11.047068 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:11.047120 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:11.051341 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:11.051421 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:11.071688 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:11.071715 46108 cri.go:89] found id: ""
I0823 19:06:11.071724 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:11.071782 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:11.075930 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:11.076007 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:11.092653 46108 cri.go:89] found id: ""
I0823 19:06:11.092679 46108 logs.go:284] 0 containers: []
W0823 19:06:11.092689 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:11.092697 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:11.092764 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:11.112201 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:11.112230 46108 cri.go:89] found id: ""
I0823 19:06:11.112240 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:11.112307 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:11.116802 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:11.116831 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:11.136593 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:11.136618 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:11.211132 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:11.211166 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:11.222746 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:11.222775 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:11.303168 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:11.303188 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:11.303199 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:11.319114 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:11.319141 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:11.345675 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:11.345702 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:11.371184 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:11.371212 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:11.406205 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:11.406240 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:11.475694 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:11.475735 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:11.503636 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:11.503666 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:11.523150 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:11.523180 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:11.562051 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:11.562090 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:14.099505 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:14.100215 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:14.100261 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:14.100309 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:14.125582 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:14.125611 46108 cri.go:89] found id: ""
I0823 19:06:14.125621 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:14.125678 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:14.131327 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:14.131408 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:14.154601 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:14.154628 46108 cri.go:89] found id: ""
I0823 19:06:14.154635 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:14.154701 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:14.159514 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:14.159603 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:14.178540 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:14.178565 46108 cri.go:89] found id: ""
I0823 19:06:14.178573 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:14.178630 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:14.182950 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:14.183018 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:14.199646 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:14.199673 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:14.199677 46108 cri.go:89] found id: ""
I0823 19:06:14.199684 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:14.199735 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:14.204477 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:14.208343 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:14.208397 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:14.228214 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:14.228243 46108 cri.go:89] found id: ""
I0823 19:06:14.228251 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:14.228305 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:14.233399 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:14.233471 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:14.250578 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:14.250607 46108 cri.go:89] found id: ""
I0823 19:06:14.250616 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:14.250675 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:14.254830 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:14.254904 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:14.282730 46108 cri.go:89] found id: ""
I0823 19:06:14.282757 46108 logs.go:284] 0 containers: []
W0823 19:06:14.282774 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:14.282780 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:14.282838 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:14.300293 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:14.300321 46108 cri.go:89] found id: ""
I0823 19:06:14.300329 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:14.300386 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:14.304543 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:14.304571 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:14.350718 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:14.350752 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:14.367639 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:14.367673 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:14.435343 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:14.435382 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:14.447785 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:14.447815 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:14.532345 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:14.532378 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:14.532393 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:14.551690 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:14.551722 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:14.591905 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:14.591933 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:14.664723 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:14.664759 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:14.689075 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:14.689103 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:14.713101 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:14.713143 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:14.745713 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:14.745750 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:14.773981 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:14.774022 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
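Stepping back, this whole section is a single wait loop: probe healthz roughly every three seconds, dump the full diagnostic set after each refused connection, and retry until the start deadline runs out. A self-contained reconstruction of that loop under the assumptions above; the function names, the sleep interval, and the deadline handling are illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe reports whether the apiserver answers /healthz, with certificate
// checks disabled as in the healthz sketch above.
func probe(url string) error {
	c := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get(url)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

// gatherAllLogs stands in for the per-source dump seen between probes
// (journalctl, dmesg, crictl logs, describe nodes, ...).
func gatherAllLogs() {
	fmt.Println("... gathering kubelet/containerd/dmesg/crictl logs ...")
}

// waitForAPIServer reconstructs the retry loop driving this section:
// one probe about every three seconds, a diagnostic dump after each
// failure, until the overall deadline expires.
func waitForAPIServer(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := probe(url)
		if err == nil {
			return nil
		}
		fmt.Printf("stopped: %s: %v\n", url, err)
		gatherAllLogs()
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

func main() {
	if err := waitForAPIServer("https://192.168.61.47:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}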
I0823 19:06:17.312146 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:17.312806 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:17.312880 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:17.312938 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:17.330864 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:17.330944 46108 cri.go:89] found id: ""
I0823 19:06:17.330969 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:17.331059 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:17.335582 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:17.335638 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:17.353605 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:17.353624 46108 cri.go:89] found id: ""
I0823 19:06:17.353631 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:17.353675 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:17.357497 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:17.357577 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:17.377588 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:17.377626 46108 cri.go:89] found id: ""
I0823 19:06:17.377636 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:17.377696 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:17.382099 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:17.382161 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:17.401289 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:17.401312 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:17.401319 46108 cri.go:89] found id: ""
I0823 19:06:17.401327 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:17.401383 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:17.405299 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:17.409182 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:17.409248 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:17.427439 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:17.427459 46108 cri.go:89] found id: ""
I0823 19:06:17.427469 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:17.427519 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:17.431764 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:17.431821 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:17.448373 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:17.448399 46108 cri.go:89] found id: ""
I0823 19:06:17.448416 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:17.448476 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:17.452429 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:17.452481 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:17.469717 46108 cri.go:89] found id: ""
I0823 19:06:17.469740 46108 logs.go:284] 0 containers: []
W0823 19:06:17.469747 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:17.469753 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:17.469805 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:17.486112 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:17.486137 46108 cri.go:89] found id: ""
I0823 19:06:17.486145 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:17.486204 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:17.489962 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:17.489991 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:17.563739 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:17.563776 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:17.574269 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:17.574299 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:17.596535 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:17.596564 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:17.617233 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:17.617267 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:17.647718 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:17.647750 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:17.687097 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:17.687135 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:17.722196 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:17.722230 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:17.742181 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:17.742207 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:17.808340 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:17.808379 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:17.894472 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
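Two different endpoints are failing in the same way here: the healthz probe targets https://192.168.61.47:8443 (the VM IP), while kubectl inside the VM targets localhost:8443 via /var/lib/minikube/kubeconfig, and both get connection refused. Since crictl still lists a kube-apiserver container ID, the container exists but nothing is accepting connections on port 8443. A manual cross-check, assuming a shell with access to the VM (both commands are lifted from this log; the curl probe mirrors the GET that api_server.go reports):
$ curl -k https://192.168.61.47:8443/healthz
$ sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig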
I0823 19:06:17.894494 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:17.894507 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:17.916051 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:17.916080 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:17.953461 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:17.953502 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:20.478759 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:20.479429 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:20.479472 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:20.479517 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:20.500015 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:20.500052 46108 cri.go:89] found id: ""
I0823 19:06:20.500061 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:20.500115 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:20.504199 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:20.504272 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:20.521497 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:20.521523 46108 cri.go:89] found id: ""
I0823 19:06:20.521531 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:20.521602 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:20.526129 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:20.526194 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:20.554028 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:20.554055 46108 cri.go:89] found id: ""
I0823 19:06:20.554064 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:20.554125 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:20.558290 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:20.558366 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:20.576745 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:20.576771 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:20.576775 46108 cri.go:89] found id: ""
I0823 19:06:20.576781 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:20.576835 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:20.581785 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:20.585852 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:20.585923 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:20.603803 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:20.603827 46108 cri.go:89] found id: ""
I0823 19:06:20.603834 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:20.603895 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:20.607978 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:20.608048 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:20.627666 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:20.627686 46108 cri.go:89] found id: ""
I0823 19:06:20.627694 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:20.627737 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:20.632181 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:20.632238 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:20.650205 46108 cri.go:89] found id: ""
I0823 19:06:20.650230 46108 logs.go:284] 0 containers: []
W0823 19:06:20.650240 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:20.650251 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:20.650308 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:20.668478 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:20.668500 46108 cri.go:89] found id: ""
I0823 19:06:20.668509 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:20.668562 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:20.673326 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:20.673354 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:20.714754 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:20.714789 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:20.748997 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:20.749028 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:20.766798 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:20.766822 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:20.837409 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:20.837447 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:20.866229 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:20.866255 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:20.935944 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:20.935992 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:21.025154 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:21.025185 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:21.025200 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:21.058400 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:21.058433 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:21.084037 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:21.084070 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:21.122780 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:21.122812 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:21.134005 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:21.134036 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:21.153320 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:21.153349 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
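kube-scheduler is the only component for which crictl returns two IDs (1f4fd563… and abc416c2…), which is why every cycle tails both; presumably one is the scheduler container from before the restart and the other is its replacement, though the log does not show their states. Dropping --quiet would show crictl's default state and age columns, which distinguish the pair (a sketch only):
$ sudo crictl ps -a --name=kube-scheduler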
I0823 19:06:23.670983 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:23.671729 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:23.671787 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:23.671839 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:23.690300 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:23.690324 46108 cri.go:89] found id: ""
I0823 19:06:23.690333 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:23.690391 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:23.695769 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:23.695840 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:23.713653 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:23.713679 46108 cri.go:89] found id: ""
I0823 19:06:23.713687 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:23.713739 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:23.717980 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:23.718047 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:23.742293 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:23.742318 46108 cri.go:89] found id: ""
I0823 19:06:23.742327 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:23.742382 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:23.746637 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:23.746688 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:23.764545 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:23.764564 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:23.764570 46108 cri.go:89] found id: ""
I0823 19:06:23.764578 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:23.764635 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:23.769385 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:23.773582 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:23.773644 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:23.789972 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:23.789991 46108 cri.go:89] found id: ""
I0823 19:06:23.789997 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:23.790041 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:23.794732 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:23.794841 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:23.813335 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:23.813358 46108 cri.go:89] found id: ""
I0823 19:06:23.813367 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:23.813424 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:23.817918 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:23.817992 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:23.836337 46108 cri.go:89] found id: ""
I0823 19:06:23.836365 46108 logs.go:284] 0 containers: []
W0823 19:06:23.836375 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:23.836383 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:23.836452 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:23.854760 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:23.854782 46108 cri.go:89] found id: ""
I0823 19:06:23.854791 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:23.854849 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:23.859227 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:23.859249 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:23.893656 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:23.893688 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:23.909502 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:23.909536 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:23.978095 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:23.978132 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:24.048140 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:24.048178 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:24.061139 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:24.061169 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:24.121262 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:24.121309 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:24.147113 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:24.147144 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:24.170655 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:24.170688 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:24.203655 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:24.203687 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:24.232782 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:24.232815 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:24.328560 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:24.328591 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:24.328606 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:24.349916 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:24.349939 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:26.868754 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:26.869392 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:26.869451 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:26.869512 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:26.889223 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:26.889246 46108 cri.go:89] found id: ""
I0823 19:06:26.889256 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:26.889305 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:26.893591 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:26.893668 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:26.913170 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:26.913197 46108 cri.go:89] found id: ""
I0823 19:06:26.913205 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:26.913275 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:26.917480 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:26.917556 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:26.936063 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:26.936087 46108 cri.go:89] found id: ""
I0823 19:06:26.936093 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:26.936143 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:26.940882 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:26.940958 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:26.958927 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:26.958950 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:26.958956 46108 cri.go:89] found id: ""
I0823 19:06:26.958964 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:26.959019 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:26.963573 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:26.967483 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:26.967540 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:26.984382 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:26.984402 46108 cri.go:89] found id: ""
I0823 19:06:26.984410 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:26.984465 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:26.989408 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:26.989474 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:27.006689 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:27.006707 46108 cri.go:89] found id: ""
I0823 19:06:27.006715 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:27.006767 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:27.011886 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:27.011947 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:27.036215 46108 cri.go:89] found id: ""
I0823 19:06:27.036249 46108 logs.go:284] 0 containers: []
W0823 19:06:27.036263 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:27.036272 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:27.036337 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:27.064621 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:27.064644 46108 cri.go:89] found id: ""
I0823 19:06:27.064653 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:27.064708 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:27.070401 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:27.070427 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:27.134688 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:27.134723 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:27.147350 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:27.147375 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:27.195360 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:27.195395 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:27.277900 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:27.277940 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:27.315975 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:27.316010 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:27.338544 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:27.338593 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:27.432654 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:27.432685 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:27.432700 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:27.460779 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:27.460815 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:27.488452 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:27.488490 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:27.517308 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:27.517346 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:27.578386 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:27.578438 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:27.609893 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:27.609932 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
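The healthz probes above land on a roughly three-second cadence (19:06:17, :20, :23, :26), each refused. The loop below reproduces the probe against the same endpoint; the three-second interval is read off these timestamps and is an assumption, not minikube's configured retry policy:
$ while ! curl -ksf https://192.168.61.47:8443/healthz; do sleep 3; done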
I0823 19:06:30.155181 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:30.155920 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:30.155967 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:30.156024 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:30.180694 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:30.180718 46108 cri.go:89] found id: ""
I0823 19:06:30.180724 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:30.180783 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:30.186267 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:30.186347 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:30.217747 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:30.217779 46108 cri.go:89] found id: ""
I0823 19:06:30.217788 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:30.217848 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:30.223522 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:30.223599 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:30.246882 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:30.246908 46108 cri.go:89] found id: ""
I0823 19:06:30.246917 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:30.246974 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:30.251123 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:30.251187 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:30.269111 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:30.269137 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:30.269143 46108 cri.go:89] found id: ""
I0823 19:06:30.269151 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:30.269211 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:30.273823 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:30.278377 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:30.278432 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:30.297232 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:30.297254 46108 cri.go:89] found id: ""
I0823 19:06:30.297262 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:30.297314 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:30.301894 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:30.301969 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:30.320093 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:30.320116 46108 cri.go:89] found id: ""
I0823 19:06:30.320124 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:30.320185 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:30.324639 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:30.324705 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:30.343752 46108 cri.go:89] found id: ""
I0823 19:06:30.343779 46108 logs.go:284] 0 containers: []
W0823 19:06:30.343789 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:30.343796 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:30.343859 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:30.364451 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:30.364475 46108 cri.go:89] found id: ""
I0823 19:06:30.364484 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:30.364544 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:30.369280 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:30.369304 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:30.430949 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:30.430984 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:30.441745 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:30.441783 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:30.537527 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:30.537569 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:30.537588 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:30.562492 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:30.562522 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:30.596878 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:30.596912 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:30.662071 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:30.662106 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:30.691365 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:30.691405 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:30.720807 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:30.720842 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:30.744868 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:30.744895 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:30.790120 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:30.790157 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:30.829824 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:30.829859 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:30.861421 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:30.861453 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:33.380925 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:33.381642 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:33.381692 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:33.381751 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:33.400140 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:33.400159 46108 cri.go:89] found id: ""
I0823 19:06:33.400165 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:33.400209 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:33.403915 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:33.403980 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:33.420690 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:33.420715 46108 cri.go:89] found id: ""
I0823 19:06:33.420723 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:33.420777 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:33.425119 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:33.425166 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:33.442477 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:33.442500 46108 cri.go:89] found id: ""
I0823 19:06:33.442507 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:33.442549 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:33.446734 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:33.446794 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:33.462854 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:33.462876 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:33.462883 46108 cri.go:89] found id: ""
I0823 19:06:33.462891 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:33.462941 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:33.466806 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:33.471050 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:33.471112 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:33.486208 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:33.486232 46108 cri.go:89] found id: ""
I0823 19:06:33.486240 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:33.486299 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:33.490066 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:33.490120 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:33.507910 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:33.507930 46108 cri.go:89] found id: ""
I0823 19:06:33.507939 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:33.508000 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:33.512488 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:33.512548 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:33.528403 46108 cri.go:89] found id: ""
I0823 19:06:33.528422 46108 logs.go:284] 0 containers: []
W0823 19:06:33.528429 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:33.528435 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:33.528489 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:33.548477 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:33.548496 46108 cri.go:89] found id: ""
I0823 19:06:33.548503 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:33.548563 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:33.552606 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:33.552630 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:33.571960 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:33.571992 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:33.597784 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:33.597809 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:33.658944 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:33.658980 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:33.737079 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:33.737109 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:33.737124 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:33.758694 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:33.758719 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:33.805837 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:33.805887 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:33.833491 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:33.833522 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:33.868896 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:33.868933 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:33.905173 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:33.905205 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:33.924286 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:33.924315 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:33.935275 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:33.935301 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:33.961618 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:33.961646 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
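Alongside the per-container logs, each gathering cycle pulls the same four host-level sources, all visible in the Run: lines above; collected by hand (verbatim from this log) they are:
$ sudo journalctl -u kubelet -n 400
$ sudo journalctl -u containerd -n 400
$ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
$ sudo `which crictl || echo crictl` ps -a || sudo docker ps -a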
I0823 19:06:36.531780 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:36.532804 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:36.532862 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:36.532920 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:36.556343 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:36.556364 46108 cri.go:89] found id: ""
I0823 19:06:36.556370 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:36.556418 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:36.560684 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:36.560749 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:36.581614 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:36.581636 46108 cri.go:89] found id: ""
I0823 19:06:36.581644 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:36.581693 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:36.586179 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:36.586264 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:36.602636 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:36.602668 46108 cri.go:89] found id: ""
I0823 19:06:36.602675 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:36.602736 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:36.607744 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:36.607810 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:36.623919 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:36.623942 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:36.623946 46108 cri.go:89] found id: ""
I0823 19:06:36.623952 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:36.624009 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:36.628395 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:36.633650 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:36.633709 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:36.654155 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:36.654181 46108 cri.go:89] found id: ""
I0823 19:06:36.654190 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:36.654240 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:36.658880 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:36.658946 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:36.678034 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:36.678061 46108 cri.go:89] found id: ""
I0823 19:06:36.678067 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:36.678126 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:36.683815 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:36.683902 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:36.702404 46108 cri.go:89] found id: ""
I0823 19:06:36.702425 46108 logs.go:284] 0 containers: []
W0823 19:06:36.702432 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:36.702438 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:36.702485 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:36.723009 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:36.723034 46108 cri.go:89] found id: ""
I0823 19:06:36.723043 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:36.723096 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:36.727531 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:36.727555 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:36.752161 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:36.752197 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:36.777024 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:36.777052 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:36.823091 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:36.823122 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:36.847267 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:36.847294 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:36.878818 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:36.878854 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:36.897474 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:36.897507 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:36.911710 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:36.911741 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:37.000069 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:37.000098 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:37.000117 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:37.020933 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:37.020959 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:37.074200 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:37.074234 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:37.108333 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:37.108368 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:37.175592 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:37.175637 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:39.742320 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:39.742861 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:39.742909 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:39.742961 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:39.760356 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:39.760376 46108 cri.go:89] found id: ""
I0823 19:06:39.760386 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:39.760436 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:39.766261 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:39.766340 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:39.783568 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:39.783590 46108 cri.go:89] found id: ""
I0823 19:06:39.783597 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:39.783639 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:39.788058 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:39.788133 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:39.805009 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:39.805034 46108 cri.go:89] found id: ""
I0823 19:06:39.805043 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:39.805100 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:39.808986 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:39.809050 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:39.825844 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:39.825862 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:39.825866 46108 cri.go:89] found id: ""
I0823 19:06:39.825874 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:39.825928 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:39.830522 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:39.834781 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:39.834844 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:39.850941 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:39.850968 46108 cri.go:89] found id: ""
I0823 19:06:39.850976 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:39.851034 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:39.855218 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:39.855296 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:39.871059 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:39.871079 46108 cri.go:89] found id: ""
I0823 19:06:39.871085 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:39.871134 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:39.875001 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:39.875072 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:39.890351 46108 cri.go:89] found id: ""
I0823 19:06:39.890376 46108 logs.go:284] 0 containers: []
W0823 19:06:39.890383 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:39.890388 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:39.890444 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:39.906428 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:39.906449 46108 cri.go:89] found id: ""
I0823 19:06:39.906456 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:39.906497 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:39.910526 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:39.910551 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:39.998329 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:39.998355 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:39.998376 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:40.024566 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:40.024594 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:40.051364 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:40.051397 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:40.068764 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:40.068788 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:40.108132 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:40.108167 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:40.142888 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:40.142920 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:40.171984 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:40.172015 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:40.239620 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:40.239659 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:40.301043 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:40.301076 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:40.311860 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:40.311885 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:40.327757 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:40.327786 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:40.353339 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:40.353370 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:42.876759 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:42.877471 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:42.877530 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:42.877607 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:42.894908 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:42.894929 46108 cri.go:89] found id: ""
I0823 19:06:42.894936 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:42.894981 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:42.898972 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:42.899033 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:42.915001 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:42.915022 46108 cri.go:89] found id: ""
I0823 19:06:42.915031 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:42.915101 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:42.919198 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:42.919256 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:42.935338 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:42.935361 46108 cri.go:89] found id: ""
I0823 19:06:42.935370 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:42.935423 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:42.939486 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:42.939548 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:42.956010 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:42.956034 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:42.956040 46108 cri.go:89] found id: ""
I0823 19:06:42.956048 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:42.956106 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:42.960464 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:42.964439 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:42.964493 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:42.982758 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:42.982782 46108 cri.go:89] found id: ""
I0823 19:06:42.982791 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:42.982875 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:42.986919 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:42.986983 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:43.003491 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:43.003506 46108 cri.go:89] found id: ""
I0823 19:06:43.003513 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:43.003554 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:43.007437 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:43.007488 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:43.025732 46108 cri.go:89] found id: ""
I0823 19:06:43.025761 46108 logs.go:284] 0 containers: []
W0823 19:06:43.025767 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:43.025775 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:43.025836 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:43.043934 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:43.043962 46108 cri.go:89] found id: ""
I0823 19:06:43.043971 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:43.044028 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:43.048415 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:43.048439 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:43.105880 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:43.105917 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:43.116950 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:43.116979 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:43.138259 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:43.138287 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:43.168099 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:43.168132 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:43.235486 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:43.235522 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:43.258649 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:43.258689 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:43.338039 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:43.338062 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:43.338077 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:43.358272 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:43.358306 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:43.374342 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:43.374371 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:43.413191 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:43.413223 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:43.442937 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:43.442966 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:43.476287 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:43.476319 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:45.994498 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:45.995134 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:45.995194 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:06:45.995255 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:06:46.014234 46108 cri.go:89] found id: "3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:46.014255 46108 cri.go:89] found id: ""
I0823 19:06:46.014262 46108 logs.go:284] 1 containers: [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8]
I0823 19:06:46.014311 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:46.019587 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:06:46.019650 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:06:46.037930 46108 cri.go:89] found id: "c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:46.037954 46108 cri.go:89] found id: ""
I0823 19:06:46.037962 46108 logs.go:284] 1 containers: [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922]
I0823 19:06:46.038018 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:46.041902 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:06:46.041977 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:06:46.060288 46108 cri.go:89] found id: "3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:46.060307 46108 cri.go:89] found id: ""
I0823 19:06:46.060314 46108 logs.go:284] 1 containers: [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959]
I0823 19:06:46.060359 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:46.064538 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:06:46.064606 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:06:46.082325 46108 cri.go:89] found id: "1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:46.082353 46108 cri.go:89] found id: "abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:46.082361 46108 cri.go:89] found id: ""
I0823 19:06:46.082369 46108 logs.go:284] 2 containers: [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e]
I0823 19:06:46.082431 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:46.086528 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:46.090457 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:06:46.090530 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:06:46.109668 46108 cri.go:89] found id: "3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:46.109696 46108 cri.go:89] found id: ""
I0823 19:06:46.109705 46108 logs.go:284] 1 containers: [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832]
I0823 19:06:46.109758 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:46.115864 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:06:46.115919 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:06:46.132599 46108 cri.go:89] found id: "02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:46.132623 46108 cri.go:89] found id: ""
I0823 19:06:46.132633 46108 logs.go:284] 1 containers: [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12]
I0823 19:06:46.132689 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:46.137253 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:06:46.137312 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:06:46.157374 46108 cri.go:89] found id: ""
I0823 19:06:46.157397 46108 logs.go:284] 0 containers: []
W0823 19:06:46.157406 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:06:46.157412 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:06:46.157465 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:06:46.177625 46108 cri.go:89] found id: "587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:46.177647 46108 cri.go:89] found id: ""
I0823 19:06:46.177656 46108 logs.go:284] 1 containers: [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f]
I0823 19:06:46.177721 46108 ssh_runner.go:195] Run: which crictl
I0823 19:06:46.182247 46108 logs.go:123] Gathering logs for kube-apiserver [3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8] ...
I0823 19:06:46.182277 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3c9c336c61dd2e0de15d439bc0c3811a85bbd0a3a86776164cf6714c391e80c8"
I0823 19:06:46.206667 46108 logs.go:123] Gathering logs for etcd [c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922] ...
I0823 19:06:46.206705 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 c46f7ac532b92810f6d1cb407b25b1f6f90f49aacba502d6be632df6e5878922"
I0823 19:06:46.225287 46108 logs.go:123] Gathering logs for coredns [3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959] ...
I0823 19:06:46.225318 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3f8592f94557ce5ee2a15a7a13a1f09544b57480dfa784bfeb6bf973902db959"
I0823 19:06:46.263800 46108 logs.go:123] Gathering logs for kube-scheduler [abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e] ...
I0823 19:06:46.263831 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 abc416c2d8911c4f892c5e67840f39dde87c37a83a44e6504b9a8aec112e961e"
I0823 19:06:46.290163 46108 logs.go:123] Gathering logs for kube-proxy [3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832] ...
I0823 19:06:46.290206 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 3e39c5a54ee088545aeca064393f3c73566df78ca98c9ff4f7796eddb2e5c832"
I0823 19:06:46.327622 46108 logs.go:123] Gathering logs for kube-controller-manager [02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12] ...
I0823 19:06:46.327658 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 02ce71cc6f4560be18cece466f4f972b5ec3a34440342c05791798a4bf0e1b12"
I0823 19:06:46.363651 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:06:46.363686 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:06:46.439243 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:06:46.439277 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:06:46.539662 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0823 19:06:46.539689 46108 logs.go:123] Gathering logs for storage-provisioner [587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f] ...
I0823 19:06:46.539705 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 587c96874a093ad547a7ad4f16086da7f6abbefab46d4c542897974138168f2f"
I0823 19:06:46.562748 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:06:46.562775 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:06:46.639069 46108 logs.go:123] Gathering logs for container status ...
I0823 19:06:46.639112 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:06:46.665438 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:06:46.665469 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:06:46.677472 46108 logs.go:123] Gathering logs for kube-scheduler [1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc] ...
I0823 19:06:46.677503 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 1f4fd563d32738d55e72a475dc88ea945e6eb93d1989e0b02488fd5e3ccb7bfc"
I0823 19:06:49.233421 46108 api_server.go:253] Checking apiserver healthz at https://192.168.61.47:8443/healthz ...
I0823 19:06:49.234058 46108 api_server.go:269] stopped: https://192.168.61.47:8443/healthz: Get "https://192.168.61.47:8443/healthz": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:06:49.234109 46108 kubeadm.go:640] restartCluster took 4m22.17753923s
W0823 19:06:49.234163 46108 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
I0823 19:06:49.234189 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0823 19:06:50.979304 46108 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.745093239s)
I0823 19:06:50.979376 46108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0823 19:06:50.992214 46108 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0823 19:06:51.000230 46108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0823 19:06:51.010860 46108 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0823 19:06:51.010919 46108 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0823 19:06:51.106704 46108 kubeadm.go:322] [init] Using Kubernetes version: v1.21.2
I0823 19:06:51.106752 46108 kubeadm.go:322] [preflight] Running pre-flight checks
I0823 19:06:51.288628 46108 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0823 19:06:51.288761 46108 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0823 19:06:51.288882 46108 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0823 19:06:51.381515 46108 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0823 19:06:51.383380 46108 out.go:204] - Generating certificates and keys ...
I0823 19:06:51.383503 46108 kubeadm.go:322] [certs] Using existing ca certificate authority
I0823 19:06:51.383583 46108 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0823 19:06:51.383680 46108 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0823 19:06:51.383753 46108 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0823 19:06:51.384021 46108 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0823 19:06:51.384174 46108 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0823 19:06:51.384732 46108 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0823 19:06:51.385124 46108 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0823 19:06:51.385324 46108 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0823 19:06:51.385710 46108 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0823 19:06:51.385869 46108 kubeadm.go:322] [certs] Using the existing "sa" key
I0823 19:06:51.385941 46108 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0823 19:06:51.789906 46108 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0823 19:06:52.240307 46108 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0823 19:06:52.844096 46108 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0823 19:06:53.069388 46108 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0823 19:06:53.085803 46108 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0823 19:06:53.087761 46108 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0823 19:06:53.088099 46108 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0823 19:06:53.265055 46108 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0823 19:06:53.266894 46108 out.go:204] - Booting up control plane ...
I0823 19:06:53.267033 46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0823 19:06:53.271650 46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0823 19:06:53.272560 46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0823 19:06:53.273352 46108 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0823 19:06:53.275598 46108 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0823 19:07:33.277021 46108 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0823 19:10:53.280081 46108 kubeadm.go:322]
I0823 19:10:53.280158 46108 kubeadm.go:322] Unfortunately, an error has occurred:
I0823 19:10:53.280201 46108 kubeadm.go:322] timed out waiting for the condition
I0823 19:10:53.280218 46108 kubeadm.go:322]
I0823 19:10:53.280259 46108 kubeadm.go:322] This error is likely caused by:
I0823 19:10:53.280297 46108 kubeadm.go:322] - The kubelet is not running
I0823 19:10:53.280405 46108 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0823 19:10:53.280416 46108 kubeadm.go:322]
I0823 19:10:53.280541 46108 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0823 19:10:53.280588 46108 kubeadm.go:322] - 'systemctl status kubelet'
I0823 19:10:53.280646 46108 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0823 19:10:53.280669 46108 kubeadm.go:322]
I0823 19:10:53.280819 46108 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0823 19:10:53.280945 46108 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI.
I0823 19:10:53.280956 46108 kubeadm.go:322]
I0823 19:10:53.281054 46108 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
I0823 19:10:53.281174 46108 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
I0823 19:10:53.281283 46108 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0823 19:10:53.281405 46108 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
I0823 19:10:53.281415 46108 kubeadm.go:322]
I0823 19:10:53.282388 46108 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0823 19:10:53.282504 46108 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0823 19:10:53.282642 46108 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0823 19:10:53.282711 46108 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0823 19:10:53.282768 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0823 19:10:54.260337 46108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0823 19:10:54.271143 46108 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0823 19:10:54.280356 46108 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0823 19:10:54.280398 46108 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0823 19:10:54.367150 46108 kubeadm.go:322] [init] Using Kubernetes version: v1.21.2
I0823 19:10:54.367267 46108 kubeadm.go:322] [preflight] Running pre-flight checks
I0823 19:10:54.516397 46108 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0823 19:10:54.516522 46108 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0823 19:10:54.516630 46108 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0823 19:10:54.605518 46108 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0823 19:10:54.607189 46108 out.go:204] - Generating certificates and keys ...
I0823 19:10:54.607326 46108 kubeadm.go:322] [certs] Using existing ca certificate authority
I0823 19:10:54.607436 46108 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0823 19:10:54.609419 46108 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0823 19:10:54.609531 46108 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0823 19:10:54.609663 46108 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0823 19:10:54.609759 46108 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0823 19:10:54.609851 46108 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0823 19:10:54.609940 46108 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0823 19:10:54.610052 46108 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0823 19:10:54.610162 46108 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0823 19:10:54.610207 46108 kubeadm.go:322] [certs] Using the existing "sa" key
I0823 19:10:54.610294 46108 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0823 19:10:54.824778 46108 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0823 19:10:54.960319 46108 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0823 19:10:55.064971 46108 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0823 19:10:55.389165 46108 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0823 19:10:55.407543 46108 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0823 19:10:55.409088 46108 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0823 19:10:55.409283 46108 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0823 19:10:55.561726 46108 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0823 19:10:55.563621 46108 out.go:204] - Booting up control plane ...
I0823 19:10:55.563738 46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0823 19:10:55.572947 46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0823 19:10:55.577429 46108 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0823 19:10:55.580595 46108 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0823 19:10:55.585959 46108 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0823 19:11:35.586874 46108 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0823 19:14:55.590709 46108 kubeadm.go:322]
I0823 19:14:55.590790 46108 kubeadm.go:322] Unfortunately, an error has occurred:
I0823 19:14:55.590864 46108 kubeadm.go:322] timed out waiting for the condition
I0823 19:14:55.590894 46108 kubeadm.go:322]
I0823 19:14:55.590939 46108 kubeadm.go:322] This error is likely caused by:
I0823 19:14:55.590982 46108 kubeadm.go:322] - The kubelet is not running
I0823 19:14:55.591069 46108 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0823 19:14:55.591075 46108 kubeadm.go:322]
I0823 19:14:55.591160 46108 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0823 19:14:55.591187 46108 kubeadm.go:322] - 'systemctl status kubelet'
I0823 19:14:55.591213 46108 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0823 19:14:55.591217 46108 kubeadm.go:322]
I0823 19:14:55.591325 46108 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0823 19:14:55.591392 46108 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI.
I0823 19:14:55.591397 46108 kubeadm.go:322]
I0823 19:14:55.591479 46108 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
I0823 19:14:55.591556 46108 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
I0823 19:14:55.591619 46108 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0823 19:14:55.591683 46108 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
I0823 19:14:55.591687 46108 kubeadm.go:322]
I0823 19:14:55.593060 46108 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0823 19:14:55.593189 46108 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0823 19:14:55.593273 46108 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0823 19:14:55.593330 46108 kubeadm.go:406] StartCluster complete in 12m28.59741012s
I0823 19:14:55.593365 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:14:55.593412 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:14:55.617288 46108 cri.go:89] found id: "10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623"
I0823 19:14:55.617321 46108 cri.go:89] found id: ""
I0823 19:14:55.617329 46108 logs.go:284] 1 containers: [10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623]
I0823 19:14:55.617385 46108 ssh_runner.go:195] Run: which crictl
I0823 19:14:55.621825 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:14:55.621912 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:14:55.641056 46108 cri.go:89] found id: "ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f"
I0823 19:14:55.641081 46108 cri.go:89] found id: ""
I0823 19:14:55.641090 46108 logs.go:284] 1 containers: [ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f]
I0823 19:14:55.641145 46108 ssh_runner.go:195] Run: which crictl
I0823 19:14:55.645786 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:14:55.645856 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:14:55.662999 46108 cri.go:89] found id: ""
I0823 19:14:55.663026 46108 logs.go:284] 0 containers: []
W0823 19:14:55.663036 46108 logs.go:286] No container was found matching "coredns"
I0823 19:14:55.663044 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:14:55.663103 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:14:55.679379 46108 cri.go:89] found id: "806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba"
I0823 19:14:55.679404 46108 cri.go:89] found id: ""
I0823 19:14:55.679413 46108 logs.go:284] 1 containers: [806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba]
I0823 19:14:55.679469 46108 ssh_runner.go:195] Run: which crictl
I0823 19:14:55.683405 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:14:55.683466 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:14:55.701440 46108 cri.go:89] found id: ""
I0823 19:14:55.701463 46108 logs.go:284] 0 containers: []
W0823 19:14:55.701472 46108 logs.go:286] No container was found matching "kube-proxy"
I0823 19:14:55.701480 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:14:55.701555 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:14:55.719282 46108 cri.go:89] found id: "57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98"
I0823 19:14:55.719315 46108 cri.go:89] found id: ""
I0823 19:14:55.719323 46108 logs.go:284] 1 containers: [57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98]
I0823 19:14:55.719380 46108 ssh_runner.go:195] Run: which crictl
I0823 19:14:55.723402 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:14:55.723471 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:14:55.740370 46108 cri.go:89] found id: ""
I0823 19:14:55.740394 46108 logs.go:284] 0 containers: []
W0823 19:14:55.740403 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:14:55.740409 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:14:55.740475 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:14:55.756481 46108 cri.go:89] found id: ""
I0823 19:14:55.756511 46108 logs.go:284] 0 containers: []
W0823 19:14:55.756520 46108 logs.go:286] No container was found matching "storage-provisioner"
I0823 19:14:55.756538 46108 logs.go:123] Gathering logs for kube-scheduler [806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba] ...
I0823 19:14:55.756552 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba"
I0823 19:14:55.829722 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:14:55.829759 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:14:55.892510 46108 logs.go:123] Gathering logs for container status ...
I0823 19:14:55.892547 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:14:55.918032 46108 logs.go:123] Gathering logs for kube-apiserver [10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623] ...
I0823 19:14:55.918075 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623"
I0823 19:14:55.947621 46108 logs.go:123] Gathering logs for etcd [ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f] ...
I0823 19:14:55.947654 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f"
I0823 19:14:55.966771 46108 logs.go:123] Gathering logs for kube-controller-manager [57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98] ...
I0823 19:14:55.966813 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98"
I0823 19:14:56.012530 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:14:56.012566 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:14:56.077734 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:14:56.077769 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:14:56.090478 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:14:56.090510 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:14:56.204896 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
W0823 19:14:56.204953 46108 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0823 19:14:56.204988 46108 out.go:239] *
W0823 19:14:56.205061 46108 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0823 19:14:56.205089 46108 out.go:239] *
W0823 19:14:56.205977 46108 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0823 19:14:56.209130 46108 out.go:177]
W0823 19:14:56.210519 46108 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0823 19:14:56.210560 46108 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0823 19:14:56.210585 46108 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0823 19:14:56.212168 46108 out.go:177]
** /stderr **
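For reference, the kubelet triage that the kubeadm output above recommends can be collected into a single pass over SSH. This is a sketch, assuming the running-upgrade-502460 VM is still up; the binary path and profile name come from this run, and the container ID is the kube-apiserver container found during the log gathering above:

    # Check kubelet health inside the VM (profile name from this run)
    out/minikube-linux-amd64 ssh -p running-upgrade-502460 "sudo systemctl status kubelet"
    out/minikube-linux-amd64 ssh -p running-upgrade-502460 "sudo journalctl -xeu kubelet"
    # List all Kubernetes containers via containerd, as the kubeadm message suggests
    out/minikube-linux-amd64 ssh -p running-upgrade-502460 "sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause"
    # Inspect the kube-apiserver container that log gathering found in this run
    out/minikube-linux-amd64 ssh -p running-upgrade-502460 "sudo crictl logs 10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623"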
version_upgrade_test.go:144: upgrade from v1.22.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-502460 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd: exit status 109
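The Suggestion line in the stderr above maps to one concrete retry. A sketch, assuming the kvm2 VM and profile from this run still exist; the flags are taken from the failed invocation plus the single flag the suggestion names:

    # Retry the upgrade start with the kubelet cgroup driver pinned to systemd
    # (per the K8S_KUBELET_NOT_RUNNING suggestion and minikube issue #4172)
    out/minikube-linux-amd64 start -p running-upgrade-502460 --memory=2200 \
      --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd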
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-23 19:14:56.729867823 +0000 UTC m=+3739.326779234
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-502460 -n running-upgrade-502460
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-502460 -n running-upgrade-502460: exit status 2 (228.930941ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p running-upgrade-502460 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs:
-- stdout --
*
* ==> Audit <==
* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| ssh | -p bridge-573325 sudo | bridge-573325 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
| | systemctl status containerd | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p bridge-573325 sudo | bridge-573325 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
| | systemctl cat containerd | | | | | |
| | --no-pager | | | | | |
| ssh | -p bridge-573325 sudo cat | bridge-573325 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
| | /lib/systemd/system/containerd.service | | | | | |
| ssh | -p bridge-573325 sudo cat | bridge-573325 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
| | /etc/containerd/config.toml | | | | | |
| ssh | -p bridge-573325 sudo | bridge-573325 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
| | containerd config dump | | | | | |
| ssh | -p bridge-573325 sudo | bridge-573325 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p bridge-573325 sudo | bridge-573325 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p bridge-573325 sudo find | bridge-573325 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p bridge-573325 sudo crio | bridge-573325 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
| | config | | | | | |
| delete | -p bridge-573325 | bridge-573325 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
| start | -p no-preload-301101 | no-preload-301101 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:11 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.28.0 | | | | | |
| addons | enable metrics-server -p old-k8s-version-355473 | old-k8s-version-355473 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:09 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-355473 | old-k8s-version-355473 | jenkins | v1.31.2 | 23 Aug 23 19:09 UTC | 23 Aug 23 19:11 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable metrics-server -p no-preload-301101 | no-preload-301101 | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC | 23 Aug 23 19:11 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-301101 | no-preload-301101 | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC | 23 Aug 23 19:12 UTC |
| | --alsologtostderr -v=3 | | | | | |
| delete | -p stopped-upgrade-228249 | stopped-upgrade-228249 | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC | 23 Aug 23 19:11 UTC |
| start | -p | default-k8s-diff-port-319240 | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC | 23 Aug 23 19:13 UTC |
| | default-k8s-diff-port-319240 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.28.0 | | | | | |
| addons | enable dashboard -p old-k8s-version-355473 | old-k8s-version-355473 | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC | 23 Aug 23 19:11 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-355473 | old-k8s-version-355473 | jenkins | v1.31.2 | 23 Aug 23 19:11 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| addons | enable dashboard -p no-preload-301101 | no-preload-301101 | jenkins | v1.31.2 | 23 Aug 23 19:12 UTC | 23 Aug 23 19:12 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-301101 | no-preload-301101 | jenkins | v1.31.2 | 23 Aug 23 19:12 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.28.0 | | | | | |
| addons | enable metrics-server -p default-k8s-diff-port-319240 | default-k8s-diff-port-319240 | jenkins | v1.31.2 | 23 Aug 23 19:13 UTC | 23 Aug 23 19:13 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p | default-k8s-diff-port-319240 | jenkins | v1.31.2 | 23 Aug 23 19:13 UTC | 23 Aug 23 19:14 UTC |
| | default-k8s-diff-port-319240 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p default-k8s-diff-port-319240 | default-k8s-diff-port-319240 | jenkins | v1.31.2 | 23 Aug 23 19:14 UTC | 23 Aug 23 19:14 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p | default-k8s-diff-port-319240 | jenkins | v1.31.2 | 23 Aug 23 19:14 UTC | |
| | default-k8s-diff-port-319240 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.28.0 | | | | | |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/08/23 19:14:52
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.20.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0823 19:14:52.632040 60591 out.go:296] Setting OutFile to fd 1 ...
I0823 19:14:52.632176 60591 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 19:14:52.632183 60591 out.go:309] Setting ErrFile to fd 2...
I0823 19:14:52.632187 60591 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0823 19:14:52.632367 60591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17086-11104/.minikube/bin
I0823 19:14:52.632907 60591 out.go:303] Setting JSON to false
I0823 19:14:52.634001 60591 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7037,"bootTime":1692811056,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0823 19:14:52.634062 60591 start.go:138] virtualization: kvm guest
I0823 19:14:52.636307 60591 out.go:177] * [default-k8s-diff-port-319240] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
I0823 19:14:52.637602 60591 out.go:177] - MINIKUBE_LOCATION=17086
I0823 19:14:52.638798 60591 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0823 19:14:52.637651 60591 notify.go:220] Checking for updates...
I0823 19:14:52.641021 60591 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17086-11104/kubeconfig
I0823 19:14:52.642396 60591 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17086-11104/.minikube
I0823 19:14:52.643647 60591 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0823 19:14:52.644931 60591 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0823 19:14:52.646528 60591 config.go:182] Loaded profile config "default-k8s-diff-port-319240": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
I0823 19:14:52.646970 60591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 19:14:52.647016 60591 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 19:14:52.662151 60591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
I0823 19:14:52.662569 60591 main.go:141] libmachine: () Calling .GetVersion
I0823 19:14:52.663120 60591 main.go:141] libmachine: Using API Version 1
I0823 19:14:52.663147 60591 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 19:14:52.663556 60591 main.go:141] libmachine: () Calling .GetMachineName
I0823 19:14:52.663754 60591 main.go:141] libmachine: (default-k8s-diff-port-319240) Calling .DriverName
I0823 19:14:52.663995 60591 driver.go:373] Setting default libvirt URI to qemu:///system
I0823 19:14:52.664284 60591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 19:14:52.664312 60591 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 19:14:52.678128 60591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
I0823 19:14:52.678538 60591 main.go:141] libmachine: () Calling .GetVersion
I0823 19:14:52.678985 60591 main.go:141] libmachine: Using API Version 1
I0823 19:14:52.679007 60591 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 19:14:52.679373 60591 main.go:141] libmachine: () Calling .GetMachineName
I0823 19:14:52.679565 60591 main.go:141] libmachine: (default-k8s-diff-port-319240) Calling .DriverName
I0823 19:14:52.714567 60591 out.go:177] * Using the kvm2 driver based on existing profile
I0823 19:14:52.715876 60591 start.go:298] selected driver: kvm2
I0823 19:14:52.715886 60591 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-319240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:default-k8s-diff-port-319240 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.123 Port:8444 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0823 19:14:52.715977 60591 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0823 19:14:52.716590 60591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0823 19:14:52.716678 60591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17086-11104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0823 19:14:52.733023 60591 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
I0823 19:14:52.733418 60591 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0823 19:14:52.733453 60591 cni.go:84] Creating CNI manager for ""
I0823 19:14:52.733461 60591 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0823 19:14:52.733474 60591 start_flags.go:319] config:
{Name:default-k8s-diff-port-319240 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16971/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692613578-17086@sha256:95d17727509667642a071be45fb75a65d8a399e35185dd68dcf504256dec81ac Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:default-k8s-diff-port-319240 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.123 Port:8444 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0823 19:14:52.733650 60591 iso.go:125] acquiring lock: {Name:mk81cce7a5d7f5e981d80e681dab8a3ecaaface9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0823 19:14:52.736139 60591 out.go:177] * Starting control plane node default-k8s-diff-port-319240 in cluster default-k8s-diff-port-319240
I0823 19:14:52.737169 60591 preload.go:132] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I0823 19:14:52.737196 60591 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
I0823 19:14:52.737204 60591 cache.go:57] Caching tarball of preloaded images
I0823 19:14:52.737251 60591 preload.go:174] Found /home/jenkins/minikube-integration/17086-11104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0823 19:14:52.737262 60591 cache.go:60] Finished verifying existence of preloaded tar for v1.28.0 on containerd
I0823 19:14:52.737368 60591 profile.go:148] Saving config to /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/default-k8s-diff-port-319240/config.json ...
I0823 19:14:52.737564 60591 start.go:365] acquiring machines lock for default-k8s-diff-port-319240: {Name:mk1833667e1e194459e10edb6eaddedbcc5a0864 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0823 19:14:52.737606 60591 start.go:369] acquired machines lock for "default-k8s-diff-port-319240" in 22.707µs
I0823 19:14:52.737621 60591 start.go:96] Skipping create...Using existing machine configuration
I0823 19:14:52.737629 60591 fix.go:54] fixHost starting:
I0823 19:14:52.737879 60591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0823 19:14:52.737902 60591 main.go:141] libmachine: Launching plugin server for driver kvm2
I0823 19:14:52.752555 60591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
I0823 19:14:52.752952 60591 main.go:141] libmachine: () Calling .GetVersion
I0823 19:14:52.753431 60591 main.go:141] libmachine: Using API Version 1
I0823 19:14:52.753451 60591 main.go:141] libmachine: () Calling .SetConfigRaw
I0823 19:14:52.753775 60591 main.go:141] libmachine: () Calling .GetMachineName
I0823 19:14:52.753961 60591 main.go:141] libmachine: (default-k8s-diff-port-319240) Calling .DriverName
I0823 19:14:52.754122 60591 main.go:141] libmachine: (default-k8s-diff-port-319240) Calling .GetState
I0823 19:14:52.755783 60591 fix.go:102] recreateIfNeeded on default-k8s-diff-port-319240: state=Stopped err=<nil>
I0823 19:14:52.755808 60591 main.go:141] libmachine: (default-k8s-diff-port-319240) Calling .DriverName
W0823 19:14:52.755953 60591 fix.go:128] unexpected machine state, will restart: <nil>
I0823 19:14:52.757648 60591 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-319240" ...
I0823 19:14:55.590709 46108 kubeadm.go:322]
I0823 19:14:55.590790 46108 kubeadm.go:322] Unfortunately, an error has occurred:
I0823 19:14:55.590864 46108 kubeadm.go:322] timed out waiting for the condition
I0823 19:14:55.590894 46108 kubeadm.go:322]
I0823 19:14:55.590939 46108 kubeadm.go:322] This error is likely caused by:
I0823 19:14:55.590982 46108 kubeadm.go:322] - The kubelet is not running
I0823 19:14:55.591069 46108 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0823 19:14:55.591075 46108 kubeadm.go:322]
I0823 19:14:55.591160 46108 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0823 19:14:55.591187 46108 kubeadm.go:322] - 'systemctl status kubelet'
I0823 19:14:55.591213 46108 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0823 19:14:55.591217 46108 kubeadm.go:322]
I0823 19:14:55.591325 46108 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0823 19:14:55.591392 46108 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0823 19:14:55.591397 46108 kubeadm.go:322]
I0823 19:14:55.591479 46108 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
I0823 19:14:55.591556 46108 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
I0823 19:14:55.591619 46108 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0823 19:14:55.591683 46108 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
I0823 19:14:55.591687 46108 kubeadm.go:322]
I0823 19:14:55.593060 46108 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0823 19:14:55.593189 46108 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0823 19:14:55.593273 46108 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0823 19:14:55.593330 46108 kubeadm.go:406] StartCluster complete in 12m28.59741012s
I0823 19:14:55.593365 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0823 19:14:55.593412 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0823 19:14:55.617288 46108 cri.go:89] found id: "10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623"
I0823 19:14:55.617321 46108 cri.go:89] found id: ""
I0823 19:14:55.617329 46108 logs.go:284] 1 containers: [10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623]
I0823 19:14:55.617385 46108 ssh_runner.go:195] Run: which crictl
I0823 19:14:55.621825 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0823 19:14:55.621912 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0823 19:14:55.641056 46108 cri.go:89] found id: "ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f"
I0823 19:14:55.641081 46108 cri.go:89] found id: ""
I0823 19:14:55.641090 46108 logs.go:284] 1 containers: [ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f]
I0823 19:14:55.641145 46108 ssh_runner.go:195] Run: which crictl
I0823 19:14:55.645786 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0823 19:14:55.645856 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0823 19:14:55.662999 46108 cri.go:89] found id: ""
I0823 19:14:55.663026 46108 logs.go:284] 0 containers: []
W0823 19:14:55.663036 46108 logs.go:286] No container was found matching "coredns"
I0823 19:14:55.663044 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0823 19:14:55.663103 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0823 19:14:55.679379 46108 cri.go:89] found id: "806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba"
I0823 19:14:55.679404 46108 cri.go:89] found id: ""
I0823 19:14:55.679413 46108 logs.go:284] 1 containers: [806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba]
I0823 19:14:55.679469 46108 ssh_runner.go:195] Run: which crictl
I0823 19:14:55.683405 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0823 19:14:55.683466 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0823 19:14:55.701440 46108 cri.go:89] found id: ""
I0823 19:14:55.701463 46108 logs.go:284] 0 containers: []
W0823 19:14:55.701472 46108 logs.go:286] No container was found matching "kube-proxy"
I0823 19:14:55.701480 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0823 19:14:55.701555 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0823 19:14:55.719282 46108 cri.go:89] found id: "57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98"
I0823 19:14:55.719315 46108 cri.go:89] found id: ""
I0823 19:14:55.719323 46108 logs.go:284] 1 containers: [57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98]
I0823 19:14:55.719380 46108 ssh_runner.go:195] Run: which crictl
I0823 19:14:55.723402 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0823 19:14:55.723471 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0823 19:14:55.740370 46108 cri.go:89] found id: ""
I0823 19:14:55.740394 46108 logs.go:284] 0 containers: []
W0823 19:14:55.740403 46108 logs.go:286] No container was found matching "kindnet"
I0823 19:14:55.740409 46108 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0823 19:14:55.740475 46108 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0823 19:14:55.756481 46108 cri.go:89] found id: ""
I0823 19:14:55.756511 46108 logs.go:284] 0 containers: []
W0823 19:14:55.756520 46108 logs.go:286] No container was found matching "storage-provisioner"
I0823 19:14:55.756538 46108 logs.go:123] Gathering logs for kube-scheduler [806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba] ...
I0823 19:14:55.756552 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba"
I0823 19:14:55.829722 46108 logs.go:123] Gathering logs for containerd ...
I0823 19:14:55.829759 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0823 19:14:55.892510 46108 logs.go:123] Gathering logs for container status ...
I0823 19:14:55.892547 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0823 19:14:55.918032 46108 logs.go:123] Gathering logs for kube-apiserver [10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623] ...
I0823 19:14:55.918075 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623"
I0823 19:14:55.947621 46108 logs.go:123] Gathering logs for etcd [ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f] ...
I0823 19:14:55.947654 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f"
I0823 19:14:55.966771 46108 logs.go:123] Gathering logs for kube-controller-manager [57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98] ...
I0823 19:14:55.966813 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /bin/crictl logs --tail 400 57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98"
I0823 19:14:56.012530 46108 logs.go:123] Gathering logs for kubelet ...
I0823 19:14:56.012566 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0823 19:14:56.077734 46108 logs.go:123] Gathering logs for dmesg ...
I0823 19:14:56.077769 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0823 19:14:56.090478 46108 logs.go:123] Gathering logs for describe nodes ...
I0823 19:14:56.090510 46108 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0823 19:14:56.204896 46108 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
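The repeated refusals on localhost:8443 mean nothing is answering on the apiserver port inside the VM. A minimal manual check, assuming the profile VM is still up and that ss and curl are present in the guest image (not verified from this log):

  $ minikube ssh -p running-upgrade-502460 "sudo ss -ltnp | grep 8443"          # is anything bound to the apiserver port?
  $ minikube ssh -p running-upgrade-502460 "curl -sk https://localhost:8443/healthz"   # a healthy apiserver answers "ok"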
W0823 19:14:56.204953 46108 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
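Taken together, kubeadm's hints above amount to a short triage sequence inside the guest; a sketch, reusing the runtime endpoint kubeadm itself prints (CONTAINERID is a placeholder to be filled from the ps output):

  $ sudo systemctl status kubelet
  $ sudo journalctl -xeu kubelet | tail -n 100
  $ sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause
  $ sudo crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID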
W0823 19:14:56.204988 46108 out.go:239] *
W0823 19:14:56.205061 46108 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
[kubeadm init stdout/stderr identical to the block above]
W0823 19:14:56.205089 46108 out.go:239] *
W0823 19:14:56.205977 46108 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0823 19:14:56.209130 46108 out.go:177]
W0823 19:14:56.210519 46108 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.21.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
[kubeadm init stdout/stderr identical to the block above]
W0823 19:14:56.210560 46108 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0823 19:14:56.210585 46108 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
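Acting on that suggestion means restarting the profile with the extra kubelet flag appended (all other start flags as in the original invocation); a sketch:

  $ out/minikube-linux-amd64 start -p running-upgrade-502460 --extra-config=kubelet.cgroup-driver=systemd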
I0823 19:14:56.212168 46108 out.go:177]
*
* ==> container status <==
* CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
57269c19a146e     ae24db9aa2cc0   49 seconds ago   Exited    kube-controller-manager   4         d43e217fe53b1
ed5e28de76164     0369cf4303ffd   57 seconds ago   Exited    etcd                      5         928ccbda06851
10baea8ae55ec     106ff58d43082   57 seconds ago   Exited    kube-apiserver            4         abe62d19cd085
806087a328e88     f917b8c8f55b7   3 minutes ago    Running   kube-scheduler            0         a721825474a44
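All three stateful control-plane containers have exited repeatedly (etcd is already on attempt 5) while kube-scheduler alone keeps running. The exited containers' logs and exit codes can be pulled by the truncated IDs shown in the table, assuming crictl accepts ID prefixes as it conventionally does:

  $ sudo crictl logs ed5e28de76164                        # etcd, the earliest failure
  $ sudo crictl inspect ed5e28de76164 | grep -i -m2 exit  # exit code/reason from the JSON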
*
* ==> containerd <==
* -- Logs begin at Wed 2023-08-23 19:00:44 UTC, end at Wed 2023-08-23 19:14:57 UTC. --
Aug 23 19:13:59 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:13:59.908786689Z" level=error msg="Failed to pipe stderr of container \"ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f\"" error="reading from a closed fifo"
Aug 23 19:13:59 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:13:59.909029808Z" level=info msg="Finish piping stderr of container \"ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f\""
Aug 23 19:13:59 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:13:59.909358158Z" level=error msg="Failed to pipe stdout of container \"ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f\"" error="reading from a closed fifo"
Aug 23 19:13:59 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:13:59.909476009Z" level=info msg="Finish piping stdout of container \"ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f\""
Aug 23 19:13:59 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:13:59.913322355Z" level=error msg="StartContainer for \"ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f\" failed" error="failed to create containerd task: OCI runtime create failed: container_linux.go:367: starting container process caused: exec: \"etcd\": executable file not found in $PATH: unknown"
Aug 23 19:14:00 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:00.197236999Z" level=info msg="RemoveContainer for \"900a6eb03e89e19798431089a59a61c435c2969be6edb17671baf9201161108e\""
Aug 23 19:14:00 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:00.204080067Z" level=info msg="RemoveContainer for \"900a6eb03e89e19798431089a59a61c435c2969be6edb17671baf9201161108e\" returns successfully"
Aug 23 19:14:07 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:07.676362551Z" level=info msg="CreateContainer within sandbox \"d43e217fe53b131454e7218bf8fa52be082b22c9ce7fad672a442a0ab705c1c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}"
Aug 23 19:14:07 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:07.713382594Z" level=info msg="CreateContainer within sandbox \"d43e217fe53b131454e7218bf8fa52be082b22c9ce7fad672a442a0ab705c1c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98\""
Aug 23 19:14:07 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:07.714246107Z" level=info msg="StartContainer for \"57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98\""
Aug 23 19:14:07 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:07.861362831Z" level=info msg="StartContainer for \"57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98\" returns successfully"
Aug 23 19:14:20 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:20.333963832Z" level=info msg="Finish piping stderr of container \"10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623\""
Aug 23 19:14:20 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:20.334085852Z" level=info msg="Finish piping stdout of container \"10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623\""
Aug 23 19:14:20 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:20.336875833Z" level=info msg="TaskExit event &TaskExit{ContainerID:10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623,ID:10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623,Pid:13271,ExitStatus:1,ExitedAt:2023-08-23 19:14:20.336451071 +0000 UTC,XXX_unrecognized:[],}"
Aug 23 19:14:20 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:20.383863500Z" level=info msg="shim disconnected" id=10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623
Aug 23 19:14:20 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:20.384105693Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed"
Aug 23 19:14:21 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:21.255581558Z" level=info msg="RemoveContainer for \"28ac3f3d7fcf94b7074a060c90afbf8b20f9c6c023f0cee413a83ebf592f0ca6\""
Aug 23 19:14:21 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:21.261020242Z" level=info msg="RemoveContainer for \"28ac3f3d7fcf94b7074a060c90afbf8b20f9c6c023f0cee413a83ebf592f0ca6\" returns successfully"
Aug 23 19:14:28 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:28.567624247Z" level=info msg="Finish piping stdout of container \"57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98\""
Aug 23 19:14:28 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:28.567878776Z" level=info msg="Finish piping stderr of container \"57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98\""
Aug 23 19:14:28 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:28.570055153Z" level=info msg="TaskExit event &TaskExit{ContainerID:57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98,ID:57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98,Pid:13322,ExitStatus:255,ExitedAt:2023-08-23 19:14:28.569462851 +0000 UTC,XXX_unrecognized:[],}"
Aug 23 19:14:28 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:28.614336221Z" level=info msg="shim disconnected" id=57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98
Aug 23 19:14:28 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:28.614437301Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed"
Aug 23 19:14:29 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:29.277098508Z" level=info msg="RemoveContainer for \"3c522063da00c32ff3d2e9d4c2597ed629acbce70133e030376a02d1a2374961\""
Aug 23 19:14:29 running-upgrade-502460 containerd[4123]: time="2023-08-23T19:14:29.282839472Z" level=info msg="RemoveContainer for \"3c522063da00c32ff3d2e9d4c2597ed629acbce70133e030376a02d1a2374961\" returns successfully"
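The 19:13:59 entries above are the most telling: etcd's task creation fails with exec: "etcd": executable file not found in $PATH, so the etcd static pod never comes up, and the apiserver (which dials etcd on 127.0.0.1:2379) and the controller-manager fall over behind it. One way to check what the referenced etcd image actually contains, assuming ctr ships alongside containerd in the guest and that the image lays the binary out where upstream etcd images do (both assumptions, not verified here; <etcd-image-ref> is a placeholder taken from the ls output):

  $ sudo ctr -n k8s.io images ls | grep etcd
  $ sudo ctr -n k8s.io images mount <etcd-image-ref> /mnt; ls /mnt/usr/local/bin; sudo umount /mnt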
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [ +0.028930] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +0.802046] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1722 comm=systemd-network
[ +1.084040] vboxguest: loading out-of-tree module taints kernel.
[ +0.004644] vboxguest: PCI device not found, probably running on physical hardware.
[ +2.090082] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[Aug23 19:01] systemd-fstab-generator[2102]: Ignoring "noauto" for root device
[ +0.131450] systemd-fstab-generator[2115]: Ignoring "noauto" for root device
[ +0.188266] systemd-fstab-generator[2145]: Ignoring "noauto" for root device
[ +34.364376] systemd-fstab-generator[2638]: Ignoring "noauto" for root device
[ +16.815080] systemd-fstab-generator[3054]: Ignoring "noauto" for root device
[Aug23 19:02] kauditd_printk_skb: 38 callbacks suppressed
[ +3.686858] systemd-fstab-generator[3631]: Ignoring "noauto" for root device
[ +0.255652] systemd-fstab-generator[3654]: Ignoring "noauto" for root device
[ +0.179236] systemd-fstab-generator[3677]: Ignoring "noauto" for root device
[ +0.369608] systemd-fstab-generator[3739]: Ignoring "noauto" for root device
[ +3.934059] kauditd_printk_skb: 71 callbacks suppressed
[ +4.689110] systemd-fstab-generator[4112]: Ignoring "noauto" for root device
[ +3.849395] kauditd_printk_skb: 14 callbacks suppressed
[ +12.243365] kauditd_printk_skb: 29 callbacks suppressed
[ +3.359217] systemd-fstab-generator[5312]: Ignoring "noauto" for root device
[ +11.066871] NFSD: Unable to end grace period: -110
[Aug23 19:06] kauditd_printk_skb: 5 callbacks suppressed
[ +3.086392] systemd-fstab-generator[11151]: Ignoring "noauto" for root device
[Aug23 19:10] systemd-fstab-generator[12390]: Ignoring "noauto" for root device
*
* ==> etcd [ed5e28de76164368d3188481f282306d5490810453726a7c281e40af96dda00f] <==
*
*
* ==> kernel <==
* 19:14:57 up 14 min, 0 users, load average: 0.13, 0.24, 0.22
Linux running-upgrade-502460 4.19.182 #1 SMP Fri Jul 2 00:45:17 UTC 2021 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2020.02.12"
*
* ==> kube-apiserver [10baea8ae55ecaf4251c471c6323d6d0a56070eaa09a1bec3df1f0ee1b5cf623] <==
* Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0823 19:13:59.963658 1 server.go:629] external host was not specified, using 192.168.61.47
I0823 19:13:59.964587 1 server.go:181] Version: v1.21.2
I0823 19:14:00.319353 1 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0823 19:14:00.320778 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0823 19:14:00.320881 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0823 19:14:00.322760 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0823 19:14:00.322966 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0823 19:14:00.326841 1 client.go:360] parsed scheme: "endpoint"
I0823 19:14:00.327270 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
W0823 19:14:00.327989 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0823 19:14:01.319552 1 client.go:360] parsed scheme: "endpoint"
I0823 19:14:01.319599 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
W0823 19:14:01.319908 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0823 19:14:01.329024 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0823 19:14:02.320757 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0823 19:14:03.061592 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0823 19:14:03.808670 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0823 19:14:06.021433 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0823 19:14:06.652921 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0823 19:14:10.772963 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0823 19:14:11.440753 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0823 19:14:17.323782 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0823 19:14:18.623507 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
Error: context deadline exceeded
*
* ==> kube-controller-manager [57269c19a146e3d058e65d520f4bc7dccaa9a91f02f65220fbad10be7e4b2b98] <==
* /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue/queue.go:151 +0x89
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).processNextWorkItem(0xc0008e6c80, 0x203000)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:263 +0x66
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).runWorker(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:258
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00067f710)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00067f710, 0x500bf00, 0xc000e43230, 0x4b25f01, 0xc0001000c0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00067f710, 0x3b9aca00, 0x0, 0x1, 0xc0001000c0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00067f710, 0x3b9aca00, 0xc0001000c0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:247 +0x1d1
goroutine 146 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00067f720, 0x500bf00, 0xc000e43200, 0x4b25f01, 0xc0001000c0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x118
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00067f720, 0xdf8475800, 0x0, 0x1, 0xc0001000c0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00067f720, 0xdf8475800, 0xc0001000c0)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/tlsconfig.go:250 +0x24b
*
* ==> kube-scheduler [806087a328e881c00d6b1547a3a48cc274208736163fbd133fbf2a33636494ba] <==
* E0823 19:13:47.830467 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:13:52.375525 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.47:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:13:55.121793 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.61.47:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:13:57.943329 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
I0823 19:14:10.383895 1 trace.go:205] Trace[890790537]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (23-Aug-2023 19:14:00.381) (total time: 10002ms):
Trace[890790537]: [10.002060061s] [10.002060061s] END
E0823 19:14:10.383985 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.61.47:8443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0823 19:14:17.221520 1 trace.go:205] Trace[5207339]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (23-Aug-2023 19:14:07.220) (total time: 10001ms):
Trace[5207339]: [10.00138842s] [10.00138842s] END
E0823 19:14:17.221609 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
I0823 19:14:17.441603 1 trace.go:205] Trace[1349132384]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (23-Aug-2023 19:14:07.440) (total time: 10000ms):
Trace[1349132384]: [10.00087358s] [10.00087358s] END
E0823 19:14:17.441668 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.61.47:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout
E0823 19:14:21.336063 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.47:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:14:21.336430 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.47:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:14:21.336564 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.47:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:14:21.337297 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.61.47:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:14:22.437512 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:14:24.641954 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:14:31.784488 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:14:37.195691 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.61.47:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:14:40.376831 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.61.47:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:14:42.808330 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.47:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:14:49.257685 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.61.47:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
E0823 19:14:56.008003 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.47:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.47:8443: connect: connection refused
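Every list/watch above fails against https://192.168.61.47:8443, alternating between connection refused (apiserver process down) and TLS handshake timeout (apiserver up but wedged), which matches the apiserver log ending in "Error: context deadline exceeded" once its own etcd dials at 127.0.0.1:2379 keep being refused. The chain can be confirmed from the tail of each component's log; a sketch using the IDs from the container-status table:

  $ sudo crictl logs --tail 5 10baea8ae55ec   # kube-apiserver: etcd connection refused, then gives up
  $ sudo crictl logs --tail 5 806087a328e88   # kube-scheduler: apiserver unreachable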
*
* ==> kubelet <==
* -- Logs begin at Wed 2023-08-23 19:00:44 UTC, end at Wed 2023-08-23 19:14:57 UTC. --
Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.323262 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: I0823 19:14:55.352025 12398 kubelet_node_status.go:71] "Attempting to register node" node="running-upgrade-502460"
Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.352714 12398 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.47:8443: connect: connection refused" node="running-upgrade-502460"
Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.424103 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.524768 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.626037 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.726225 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.826812 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:55 running-upgrade-502460 kubelet[12398]: E0823 19:14:55.927025 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.028036 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.128434 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.229326 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.329947 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.430884 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.531477 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.632533 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.733613 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.834531 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:56 running-upgrade-502460 kubelet[12398]: E0823 19:14:56.936413 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.036967 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.137231 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.238384 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.338562 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.439225 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
Aug 23 19:14:57 running-upgrade-502460 kubelet[12398]: E0823 19:14:57.539510 12398 kubelet.go:2291] "Error getting node" err="node \"running-upgrade-502460\" not found"
-- /stdout --
** stderr **
E0823 19:14:57.451370 60720 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
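The kubelet section shows the process alive but looping on "node not found": it cannot register because the apiserver never comes up, so enabling the service (the preflight warning earlier) would not by itself fix anything. The cgroup-driver mismatch that the minikube suggestion points at can be checked directly; a sketch assuming the conventional config paths (the kubelet file is the one kubeadm wrote above, the containerd path is the stock default):

  $ sudo grep -i cgroupdriver /var/lib/kubelet/config.yaml        # kubelet side
  $ sudo grep -i -A1 systemdcgroup /etc/containerd/config.toml    # containerd/runc side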
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p running-upgrade-502460 -n running-upgrade-502460
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p running-upgrade-502460 -n running-upgrade-502460: exit status 2 (227.908785ms)
-- stdout --
Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-502460" apiserver is not running, skipping kubectl commands (state="Stopped")
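The same status probe can be run by hand to see all component states rather than just the apiserver field; a sketch:

  $ out/minikube-linux-amd64 status -p running-upgrade-502460                         # full status table
  $ out/minikube-linux-amd64 status -p running-upgrade-502460 --format={{.APIServer}} # what the harness checked; prints "Stopped" here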
helpers_test.go:175: Cleaning up "running-upgrade-502460" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p running-upgrade-502460
E0823 19:14:58.449706 18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17086-11104/.minikube/profiles/custom-flannel-573325/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-502460: (1.44303674s)
--- FAIL: TestRunningBinaryUpgrade (909.54s)
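To iterate on this failure outside CI, the single test can be scoped and re-run against a locally built binary; a sketch of one way minikube contributors do this (the make target and TEST_ARGS convention are assumed from the project's contributor docs, and a built out/minikube-linux-amd64 plus a working kvm2 host are prerequisites):

  $ make   # builds out/minikube-linux-amd64
  $ env TEST_ARGS="-minikube-start-args=--driver=kvm2 --container-runtime=containerd -test.run TestRunningBinaryUpgrade" make integration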