=== RUN TestPreload
preload_test.go:44: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-203208 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
E0307 18:45:08.839077 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:45:25.776014 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-203208 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4: (2m2.314138505s)
preload_test.go:57: (dbg) Run: out/minikube-linux-amd64 ssh -p test-preload-203208 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-203208 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.417849066s)
preload_test.go:63: (dbg) Run: out/minikube-linux-amd64 stop -p test-preload-203208
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-203208: (1m32.007722664s)
preload_test.go:71: (dbg) Run: out/minikube-linux-amd64 start -p test-preload-203208 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd
E0307 18:47:15.578245 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:50:08.837664 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:50:18.626162 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:50:25.776671 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:52:15.578647 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 18:53:11.889139 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:55:08.839268 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 18:55:25.776671 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 18:57:15.578878 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/functional-244351/client.crt: no such file or directory
E0307 19:00:08.826062 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
E0307 19:00:08.838264 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/ingress-addon-legacy-857097/client.crt: no such file or directory
E0307 19:00:25.776761 11106 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/addons-628397/client.crt: no such file or directory
preload_test.go:71: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-203208 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd: exit status 109 (13m36.2770409s)
-- stdout --
* [test-preload-203208] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15985
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.26.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.2
* Using the kvm2 driver based on existing profile
* Starting control plane node test-preload-203208 in cluster test-preload-203208
* Downloading Kubernetes v1.24.4 preload ...
* Restarting existing kvm2 VM for "test-preload-203208" ...
* Preparing Kubernetes v1.24.4 on containerd 1.6.18 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0307 18:47:08.188999 26384 out.go:296] Setting OutFile to fd 1 ...
I0307 18:47:08.189163 26384 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 18:47:08.189221 26384 out.go:309] Setting ErrFile to fd 2...
I0307 18:47:08.189235 26384 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 18:47:08.189633 26384 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4052/.minikube/bin
I0307 18:47:08.190229 26384 out.go:303] Setting JSON to false
I0307 18:47:08.191033 26384 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5376,"bootTime":1678209452,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0307 18:47:08.191096 26384 start.go:135] virtualization: kvm guest
I0307 18:47:08.193540 26384 out.go:177] * [test-preload-203208] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0307 18:47:08.195219 26384 out.go:177] - MINIKUBE_LOCATION=15985
I0307 18:47:08.196770 26384 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0307 18:47:08.195178 26384 notify.go:220] Checking for updates...
I0307 18:47:08.198392 26384 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
I0307 18:47:08.199832 26384 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
I0307 18:47:08.201253 26384 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0307 18:47:08.202663 26384 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0307 18:47:08.204748 26384 config.go:182] Loaded profile config "test-preload-203208": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0307 18:47:08.205285 26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0307 18:47:08.205342 26384 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:47:08.220069 26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
I0307 18:47:08.220563 26384 main.go:141] libmachine: () Calling .GetVersion
I0307 18:47:08.221076 26384 main.go:141] libmachine: Using API Version 1
I0307 18:47:08.221096 26384 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:47:08.221432 26384 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:47:08.221584 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:08.223753 26384 out.go:177] * Kubernetes 1.26.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.2
I0307 18:47:08.225235 26384 driver.go:365] Setting default libvirt URI to qemu:///system
I0307 18:47:08.225524 26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0307 18:47:08.225572 26384 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:47:08.239705 26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
I0307 18:47:08.240091 26384 main.go:141] libmachine: () Calling .GetVersion
I0307 18:47:08.240557 26384 main.go:141] libmachine: Using API Version 1
I0307 18:47:08.240573 26384 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:47:08.240906 26384 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:47:08.241120 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:08.275331 26384 out.go:177] * Using the kvm2 driver based on existing profile
I0307 18:47:08.276690 26384 start.go:296] selected driver: kvm2
I0307 18:47:08.276702 26384 start.go:857] validating driver "kvm2" against &{Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:47:08.276795 26384 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0307 18:47:08.277360 26384 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:47:08.277421 26384 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15985-4052/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0307 18:47:08.291366 26384 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0307 18:47:08.291664 26384 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0307 18:47:08.291694 26384 cni.go:84] Creating CNI manager for ""
I0307 18:47:08.291705 26384 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0307 18:47:08.291717 26384 start_flags.go:319] config:
{Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:47:08.291838 26384 iso.go:125] acquiring lock: {Name:mkd51cb229a70df75d89beefefdcafed4c3dd9f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:47:08.293852 26384 out.go:177] * Starting control plane node test-preload-203208 in cluster test-preload-203208
I0307 18:47:08.296143 26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0307 18:47:08.450857 26384 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0307 18:47:08.450906 26384 cache.go:57] Caching tarball of preloaded images
I0307 18:47:08.451048 26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0307 18:47:08.453213 26384 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
I0307 18:47:08.454642 26384 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0307 18:47:08.614514 26384 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0307 18:47:32.826448 26384 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0307 18:47:32.826536 26384 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0307 18:47:33.690125 26384 cache.go:60] Finished verifying existence of preloaded tar for v1.24.4 on containerd
I0307 18:47:33.690264 26384 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/config.json ...
I0307 18:47:33.690465 26384 cache.go:193] Successfully downloaded all kic artifacts
I0307 18:47:33.690499 26384 start.go:364] acquiring machines lock for test-preload-203208: {Name:mk86d1042b74b1a783c77f2a2445172eb6d30958 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 18:47:33.690551 26384 start.go:368] acquired machines lock for "test-preload-203208" in 35.693µs
I0307 18:47:33.690566 26384 start.go:96] Skipping create...Using existing machine configuration
I0307 18:47:33.690574 26384 fix.go:55] fixHost starting:
I0307 18:47:33.690832 26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0307 18:47:33.690865 26384 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:47:33.704555 26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
I0307 18:47:33.704995 26384 main.go:141] libmachine: () Calling .GetVersion
I0307 18:47:33.705526 26384 main.go:141] libmachine: Using API Version 1
I0307 18:47:33.705549 26384 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:47:33.705815 26384 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:47:33.706046 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:33.706249 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetState
I0307 18:47:33.707747 26384 fix.go:103] recreateIfNeeded on test-preload-203208: state=Stopped err=<nil>
I0307 18:47:33.707767 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
W0307 18:47:33.707933 26384 fix.go:129] unexpected machine state, will restart: <nil>
I0307 18:47:33.710555 26384 out.go:177] * Restarting existing kvm2 VM for "test-preload-203208" ...
I0307 18:47:33.712032 26384 main.go:141] libmachine: (test-preload-203208) Calling .Start
I0307 18:47:33.712220 26384 main.go:141] libmachine: (test-preload-203208) Ensuring networks are active...
I0307 18:47:33.712842 26384 main.go:141] libmachine: (test-preload-203208) Ensuring network default is active
I0307 18:47:33.713296 26384 main.go:141] libmachine: (test-preload-203208) Ensuring network mk-test-preload-203208 is active
I0307 18:47:33.713652 26384 main.go:141] libmachine: (test-preload-203208) Getting domain xml...
I0307 18:47:33.714346 26384 main.go:141] libmachine: (test-preload-203208) Creating domain...
I0307 18:47:34.910876 26384 main.go:141] libmachine: (test-preload-203208) Waiting to get IP...
I0307 18:47:34.911746 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:34.912163 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:34.912255 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:34.912165 26419 retry.go:31] will retry after 212.425256ms: waiting for machine to come up
I0307 18:47:35.126663 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:35.127105 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:35.127129 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.127053 26419 retry.go:31] will retry after 263.969499ms: waiting for machine to come up
I0307 18:47:35.392652 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:35.393060 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:35.393084 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.393015 26419 retry.go:31] will retry after 468.684911ms: waiting for machine to come up
I0307 18:47:35.863601 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:35.864010 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:35.864033 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.863947 26419 retry.go:31] will retry after 431.412452ms: waiting for machine to come up
I0307 18:47:36.296448 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:36.296882 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:36.296912 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:36.296828 26419 retry.go:31] will retry after 752.77311ms: waiting for machine to come up
I0307 18:47:37.050685 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:37.051090 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:37.051119 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:37.051041 26419 retry.go:31] will retry after 743.261623ms: waiting for machine to come up
I0307 18:47:37.795856 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:37.796272 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:37.796308 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:37.796215 26419 retry.go:31] will retry after 1.170690029s: waiting for machine to come up
I0307 18:47:38.968781 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:38.969233 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:38.969258 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:38.969184 26419 retry.go:31] will retry after 1.337094513s: waiting for machine to come up
I0307 18:47:40.308636 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:40.309023 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:40.309045 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:40.308986 26419 retry.go:31] will retry after 1.490851661s: waiting for machine to come up
I0307 18:47:41.801795 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:41.802239 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:41.802269 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:41.802176 26419 retry.go:31] will retry after 2.070649174s: waiting for machine to come up
I0307 18:47:43.874879 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:43.875349 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:43.875380 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:43.875281 26419 retry.go:31] will retry after 2.737681725s: waiting for machine to come up
I0307 18:47:46.616128 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:46.616688 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:46.616712 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:46.616637 26419 retry.go:31] will retry after 2.87929565s: waiting for machine to come up
I0307 18:47:49.497470 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:49.498002 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:49.498030 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:49.497932 26419 retry.go:31] will retry after 4.103227875s: waiting for machine to come up
I0307 18:47:53.606187 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.606663 26384 main.go:141] libmachine: (test-preload-203208) Found IP for machine: 192.168.39.212
I0307 18:47:53.606696 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has current primary IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.606703 26384 main.go:141] libmachine: (test-preload-203208) Reserving static IP address...
I0307 18:47:53.607103 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "test-preload-203208", mac: "52:54:00:c5:37:98", ip: "192.168.39.212"} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.607138 26384 main.go:141] libmachine: (test-preload-203208) Reserved static IP address: 192.168.39.212
I0307 18:47:53.607159 26384 main.go:141] libmachine: (test-preload-203208) DBG | skip adding static IP to network mk-test-preload-203208 - found existing host DHCP lease matching {name: "test-preload-203208", mac: "52:54:00:c5:37:98", ip: "192.168.39.212"}
I0307 18:47:53.607180 26384 main.go:141] libmachine: (test-preload-203208) DBG | Getting to WaitForSSH function...
I0307 18:47:53.607195 26384 main.go:141] libmachine: (test-preload-203208) Waiting for SSH to be available...
I0307 18:47:53.609451 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.609920 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.609952 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.610021 26384 main.go:141] libmachine: (test-preload-203208) DBG | Using SSH client type: external
I0307 18:47:53.610088 26384 main.go:141] libmachine: (test-preload-203208) DBG | Using SSH private key: /home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa (-rw-------)
I0307 18:47:53.610128 26384 main.go:141] libmachine: (test-preload-203208) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa -p 22] /usr/bin/ssh <nil>}
I0307 18:47:53.610153 26384 main.go:141] libmachine: (test-preload-203208) DBG | About to run SSH command:
I0307 18:47:53.610166 26384 main.go:141] libmachine: (test-preload-203208) DBG | exit 0
I0307 18:47:53.693376 26384 main.go:141] libmachine: (test-preload-203208) DBG | SSH cmd err, output: <nil>:
I0307 18:47:53.693716 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetConfigRaw
I0307 18:47:53.694380 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
I0307 18:47:53.696583 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.696983 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.697018 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.697232 26384 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/config.json ...
I0307 18:47:53.697422 26384 machine.go:88] provisioning docker machine ...
I0307 18:47:53.697443 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:53.697627 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
I0307 18:47:53.697782 26384 buildroot.go:166] provisioning hostname "test-preload-203208"
I0307 18:47:53.697798 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
I0307 18:47:53.697947 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:53.699860 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.700195 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.700225 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.700341 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:53.700502 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:53.700619 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:53.700716 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:53.700853 26384 main.go:141] libmachine: Using SSH client type: native
I0307 18:47:53.701264 26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.39.212 22 <nil> <nil>}
I0307 18:47:53.701276 26384 main.go:141] libmachine: About to run SSH command:
sudo hostname test-preload-203208 && echo "test-preload-203208" | sudo tee /etc/hostname
I0307 18:47:53.818077 26384 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-203208
I0307 18:47:53.818106 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:53.820950 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.821308 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.821334 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.821486 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:53.821689 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:53.821852 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:53.822005 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:53.822192 26384 main.go:141] libmachine: Using SSH client type: native
I0307 18:47:53.822574 26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.39.212 22 <nil> <nil>}
I0307 18:47:53.822590 26384 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-203208' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-203208/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-203208' | sudo tee -a /etc/hosts;
fi
fi
I0307 18:47:53.938498 26384 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0307 18:47:53.938531 26384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15985-4052/.minikube CaCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15985-4052/.minikube}
I0307 18:47:53.938554 26384 buildroot.go:174] setting up certificates
I0307 18:47:53.938564 26384 provision.go:83] configureAuth start
I0307 18:47:53.938577 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
I0307 18:47:53.938823 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
I0307 18:47:53.941788 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.942174 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.942193 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.942389 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:53.944344 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.944651 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.944679 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.944819 26384 provision.go:138] copyHostCerts
I0307 18:47:53.944864 26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem, removing ...
I0307 18:47:53.944874 26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem
I0307 18:47:53.944936 26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem (1123 bytes)
I0307 18:47:53.945028 26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem, removing ...
I0307 18:47:53.945042 26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem
I0307 18:47:53.945069 26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem (1679 bytes)
I0307 18:47:53.945118 26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem, removing ...
I0307 18:47:53.945125 26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem
I0307 18:47:53.945144 26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem (1078 bytes)
I0307 18:47:53.945185 26384 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem org=jenkins.test-preload-203208 san=[192.168.39.212 192.168.39.212 localhost 127.0.0.1 minikube test-preload-203208]
I0307 18:47:54.280078 26384 provision.go:172] copyRemoteCerts
I0307 18:47:54.280140 26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 18:47:54.280162 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:54.282745 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.283051 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.283081 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.283221 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:54.283408 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.283548 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:54.283668 26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
I0307 18:47:54.366577 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0307 18:47:54.389837 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0307 18:47:54.411718 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0307 18:47:54.433964 26384 provision.go:86] duration metric: configureAuth took 495.388641ms
I0307 18:47:54.433989 26384 buildroot.go:189] setting minikube options for container-runtime
I0307 18:47:54.434187 26384 config.go:182] Loaded profile config "test-preload-203208": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0307 18:47:54.434202 26384 machine.go:91] provisioned docker machine in 736.766542ms
I0307 18:47:54.434211 26384 start.go:300] post-start starting for "test-preload-203208" (driver="kvm2")
I0307 18:47:54.434220 26384 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0307 18:47:54.434345 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:54.434642 26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0307 18:47:54.434666 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:54.437421 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.437782 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.437822 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.437973 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:54.438168 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.438298 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:54.438399 26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
I0307 18:47:54.518617 26384 ssh_runner.go:195] Run: cat /etc/os-release
I0307 18:47:54.522870 26384 info.go:137] Remote host: Buildroot 2021.02.12
I0307 18:47:54.522893 26384 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-4052/.minikube/addons for local assets ...
I0307 18:47:54.522953 26384 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-4052/.minikube/files for local assets ...
I0307 18:47:54.523037 26384 filesync.go:149] local asset: /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem -> 111062.pem in /etc/ssl/certs
I0307 18:47:54.523135 26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0307 18:47:54.530858 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem --> /etc/ssl/certs/111062.pem (1708 bytes)
I0307 18:47:54.553945 26384 start.go:303] post-start completed in 119.718718ms
I0307 18:47:54.553971 26384 fix.go:57] fixHost completed within 20.863395553s
I0307 18:47:54.553997 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:54.556837 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.557183 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.557209 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.557405 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:54.557590 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.557727 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.557837 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:54.558046 26384 main.go:141] libmachine: Using SSH client type: native
I0307 18:47:54.558428 26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.39.212 22 <nil> <nil>}
I0307 18:47:54.558440 26384 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0307 18:47:54.666375 26384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678214874.615825414
I0307 18:47:54.666396 26384 fix.go:207] guest clock: 1678214874.615825414
I0307 18:47:54.666406 26384 fix.go:220] Guest: 2023-03-07 18:47:54.615825414 +0000 UTC Remote: 2023-03-07 18:47:54.553975557 +0000 UTC m=+46.403616421 (delta=61.849857ms)
I0307 18:47:54.666428 26384 fix.go:191] guest clock delta is within tolerance: 61.849857ms
I0307 18:47:54.666435 26384 start.go:83] releasing machines lock for "test-preload-203208", held for 20.975873468s
I0307 18:47:54.666460 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:54.666725 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
I0307 18:47:54.669426 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.669811 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.669848 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.669973 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:54.670422 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:54.670589 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:54.670656 26384 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0307 18:47:54.670718 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:54.670826 26384 ssh_runner.go:195] Run: cat /version.json
I0307 18:47:54.670851 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:54.673445 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.673511 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.673800 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.673827 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.673938 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.673967 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.674023 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:54.674214 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.674218 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:54.674394 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.674402 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:54.674565 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:54.674569 26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
I0307 18:47:54.674704 26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
I0307 18:47:54.759342 26384 ssh_runner.go:195] Run: systemctl --version
I0307 18:47:54.887421 26384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0307 18:47:54.893321 26384 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0307 18:47:54.893397 26384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0307 18:47:54.911277 26384 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0307 18:47:54.911299 26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0307 18:47:54.911409 26384 ssh_runner.go:195] Run: sudo crictl images --output json
I0307 18:47:58.947601 26384 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.036162087s)
I0307 18:47:58.947737 26384 containerd.go:604] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
I0307 18:47:58.947802 26384 ssh_runner.go:195] Run: which lz4
I0307 18:47:58.951928 26384 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0307 18:47:58.955886 26384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0307 18:47:58.955917 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
I0307 18:48:00.759696 26384 containerd.go:551] Took 1.807807 seconds to copy over tarball
I0307 18:48:00.759760 26384 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0307 18:48:03.914699 26384 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.15491167s)
I0307 18:48:03.914730 26384 containerd.go:558] Took 3.155008 seconds to extract the tarball
I0307 18:48:03.914761 26384 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0307 18:48:03.954806 26384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 18:48:04.051307 26384 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 18:48:04.067055 26384 start.go:485] detecting cgroup driver to use...
I0307 18:48:04.067143 26384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0307 18:48:06.737555 26384 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.670382401s)
I0307 18:48:06.737634 26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 18:48:06.749559 26384 docker.go:186] disabling cri-docker service (if available) ...
I0307 18:48:06.749615 26384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0307 18:48:06.761329 26384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0307 18:48:06.773038 26384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0307 18:48:06.870678 26384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0307 18:48:06.979667 26384 docker.go:202] disabling docker service ...
I0307 18:48:06.979735 26384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0307 18:48:06.992492 26384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0307 18:48:07.004415 26384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0307 18:48:07.107126 26384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0307 18:48:07.218342 26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0307 18:48:07.230717 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 18:48:07.248387 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
I0307 18:48:07.257036 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0307 18:48:07.266682 26384 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0307 18:48:07.266740 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0307 18:48:07.276084 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 18:48:07.285768 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0307 18:48:07.295044 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 18:48:07.304543 26384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0307 18:48:07.314540 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0307 18:48:07.324106 26384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0307 18:48:07.332553 26384 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0307 18:48:07.332592 26384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0307 18:48:07.345783 26384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0307 18:48:07.354423 26384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 18:48:07.450860 26384 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 18:48:07.472878 26384 start.go:532] Will wait 60s for socket path /run/containerd/containerd.sock
I0307 18:48:07.472979 26384 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0307 18:48:07.480739 26384 retry.go:31] will retry after 1.355526534s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0307 18:48:08.836380 26384 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0307 18:48:08.842045 26384 start.go:553] Will wait 60s for crictl version
I0307 18:48:08.842108 26384 ssh_runner.go:195] Run: which crictl
I0307 18:48:08.846136 26384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0307 18:48:08.879500 26384 start.go:569] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.6.18
RuntimeApiVersion: v1alpha2
I0307 18:48:08.879555 26384 ssh_runner.go:195] Run: containerd --version
I0307 18:48:08.907039 26384 ssh_runner.go:195] Run: containerd --version
I0307 18:48:08.937824 26384 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.18 ...
I0307 18:48:08.939189 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
I0307 18:48:08.941766 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:48:08.942253 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:48:08.942274 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:48:08.942470 26384 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0307 18:48:08.946333 26384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0307 18:48:08.958372 26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0307 18:48:08.958447 26384 ssh_runner.go:195] Run: sudo crictl images --output json
I0307 18:48:08.984433 26384 containerd.go:608] all images are preloaded for containerd runtime.
I0307 18:48:08.984454 26384 containerd.go:522] Images already preloaded, skipping extraction
I0307 18:48:08.984503 26384 ssh_runner.go:195] Run: sudo crictl images --output json
I0307 18:48:09.011132 26384 containerd.go:608] all images are preloaded for containerd runtime.
I0307 18:48:09.011156 26384 cache_images.go:84] Images are preloaded, skipping loading
I0307 18:48:09.011204 26384 ssh_runner.go:195] Run: sudo crictl info
I0307 18:48:09.039874 26384 cni.go:84] Creating CNI manager for ""
I0307 18:48:09.039898 26384 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0307 18:48:09.039907 26384 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0307 18:48:09.039928 26384 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-203208 NodeName:test-preload-203208 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0307 18:48:09.040095 26384 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.212
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-203208"
kubeletExtraArgs:
node-ip: 192.168.39.212
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.4
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0307 18:48:09.040202 26384 kubeadm.go:968] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-203208 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
[Install]
config:
{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
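The `[Service]` stanza of the drop-in above has two `ExecStart` lines on purpose: in a systemd drop-in, an empty `ExecStart=` clears the command inherited from the base `kubelet.service`, and the next line installs the replacement — for non-oneshot services systemd rejects a second `ExecStart` unless the list is reset first. A minimal sketch of the idiom, written to a temp dir rather than `/etc/systemd/system` (command line shortened for illustration):

```shell
# Write an illustrative kubelet drop-in and verify the reset-then-set idiom.
d=$(mktemp -d)
cat > "$d/10-kubeadm.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --config=/var/lib/kubelet/config.yaml
EOF
# First ExecStart= empties the inherited command list; second sets the new one.
grep -c '^ExecStart=' "$d/10-kubeadm.conf"   # prints 2
```

On a live node, `systemctl cat kubelet` shows the base unit plus this drop-in merged together.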
I0307 18:48:09.040264 26384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
I0307 18:48:09.049030 26384 binaries.go:44] Found k8s binaries, skipping transfer
I0307 18:48:09.049088 26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0307 18:48:09.057226 26384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (484 bytes)
I0307 18:48:09.073102 26384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0307 18:48:09.087939 26384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
I0307 18:48:09.103091 26384 ssh_runner.go:195] Run: grep 192.168.39.212 control-plane.minikube.internal$ /etc/hosts
I0307 18:48:09.106714 26384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
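The `{ grep -v …; echo …; } > tmp; cp tmp /etc/hosts` command above is an idempotent replace: it strips any stale `control-plane.minikube.internal` entry and appends the current IP, so repeated runs leave exactly one line. Reproduced here against a temp file instead of `/etc/hosts` (no sudo needed; the sketch anchors on a space rather than the log's tab for portability):

```shell
# Idempotently replace a host entry: drop the old line, append the new one.
h=$(mktemp)
printf '127.0.0.1 localhost\n192.168.39.99 control-plane.minikube.internal\n' > "$h"
fixhosts() {
  { grep -v ' control-plane\.minikube\.internal$' "$h"; \
    echo "192.168.39.212 control-plane.minikube.internal"; } > "$h.new"
  cp "$h.new" "$h"
}
fixhosts; fixhosts   # running twice still leaves exactly one entry
grep -c 'control-plane.minikube.internal' "$h"   # prints 1
```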
I0307 18:48:09.118609 26384 certs.go:56] Setting up /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208 for IP: 192.168.39.212
I0307 18:48:09.118642 26384 certs.go:186] acquiring lock for shared ca certs: {Name:mk07c09235b5b83043c0b2b2f22c2249661f377a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:48:09.118791 26384 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.key
I0307 18:48:09.118849 26384 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.key
I0307 18:48:09.118912 26384 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.key
I0307 18:48:09.118967 26384 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.key.543da273
I0307 18:48:09.119053 26384 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.key
I0307 18:48:09.119150 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106.pem (1338 bytes)
W0307 18:48:09.119182 26384 certs.go:397] ignoring /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106_empty.pem, impossibly tiny 0 bytes
I0307 18:48:09.119193 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem (1679 bytes)
I0307 18:48:09.119222 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem (1078 bytes)
I0307 18:48:09.119259 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem (1123 bytes)
I0307 18:48:09.119296 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem (1679 bytes)
I0307 18:48:09.119354 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem (1708 bytes)
I0307 18:48:09.119887 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0307 18:48:09.142561 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0307 18:48:09.164647 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0307 18:48:09.186856 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0307 18:48:09.209055 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0307 18:48:09.233821 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0307 18:48:09.256607 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0307 18:48:09.279276 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0307 18:48:09.301654 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0307 18:48:09.323040 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106.pem --> /usr/share/ca-certificates/11106.pem (1338 bytes)
I0307 18:48:09.344849 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem --> /usr/share/ca-certificates/111062.pem (1708 bytes)
I0307 18:48:09.366857 26384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0307 18:48:09.382598 26384 ssh_runner.go:195] Run: openssl version
I0307 18:48:09.387988 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0307 18:48:09.396852 26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0307 18:48:09.401359 26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 7 18:03 /usr/share/ca-certificates/minikubeCA.pem
I0307 18:48:09.401436 26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0307 18:48:09.406740 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0307 18:48:09.415682 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11106.pem && ln -fs /usr/share/ca-certificates/11106.pem /etc/ssl/certs/11106.pem"
I0307 18:48:09.424547 26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11106.pem
I0307 18:48:09.428975 26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 7 18:09 /usr/share/ca-certificates/11106.pem
I0307 18:48:09.429015 26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11106.pem
I0307 18:48:09.434193 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11106.pem /etc/ssl/certs/51391683.0"
I0307 18:48:09.443361 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111062.pem && ln -fs /usr/share/ca-certificates/111062.pem /etc/ssl/certs/111062.pem"
I0307 18:48:09.452688 26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111062.pem
I0307 18:48:09.457057 26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 7 18:09 /usr/share/ca-certificates/111062.pem
I0307 18:48:09.457108 26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111062.pem
I0307 18:48:09.462237 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111062.pem /etc/ssl/certs/3ec20f2e.0"
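The `b5213941.0`-style symlinks created above are OpenSSL's hashed-directory lookup scheme: a CA in `/etc/ssl/certs` must be reachable under `<subject_hash>.0` for chain verification to find it, and `openssl x509 -hash -noout` prints that hash. A sketch with a throwaway self-signed cert in a temp dir (the CN is illustrative; assumes the `openssl` CLI is installed):

```shell
# Generate a scratch CA, compute its subject hash, and create the <hash>.0 link.
d=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=sketchCA" -keyout "$d/ca.key" -out "$d/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$d/ca.pem")
ln -fs "$d/ca.pem" "$d/$hash.0"
readlink "$d/$hash.0"
```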
I0307 18:48:09.471411 26384 kubeadm.go:401] StartCluster: {Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:48:09.471554 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0307 18:48:09.471596 26384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0307 18:48:09.501095 26384 cri.go:87] found id: ""
I0307 18:48:09.501172 26384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0307 18:48:09.510140 26384 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0307 18:48:09.510163 26384 kubeadm.go:633] restartCluster start
I0307 18:48:09.510218 26384 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0307 18:48:09.518643 26384 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0307 18:48:09.519032 26384 kubeconfig.go:135] verify returned: extract IP: "test-preload-203208" does not appear in /home/jenkins/minikube-integration/15985-4052/kubeconfig
I0307 18:48:09.519129 26384 kubeconfig.go:146] "test-preload-203208" context is missing from /home/jenkins/minikube-integration/15985-4052/kubeconfig - will repair!
I0307 18:48:09.519386 26384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4052/kubeconfig: {Name:mk89c8bdc0292c804b7314ba2438e95e1215b3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:48:09.519958 26384 kapi.go:59] client config for test-preload-203208: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.key", CAFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 18:48:09.520801 26384 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0307 18:48:09.528914 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:09.528956 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:09.538990 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:10.039696 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:10.039767 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:10.050769 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:10.539371 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:10.539470 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:10.550785 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:11.039988 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:11.040093 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:11.051278 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:11.539936 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:11.540040 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:11.551371 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:12.040000 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:12.040077 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:12.051583 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:12.539114 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:12.539176 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:12.550419 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:13.040079 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:13.040172 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:13.052432 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:13.540058 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:13.540141 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:13.551703 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:14.039765 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:14.039847 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:14.051403 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:14.540016 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:14.540094 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:14.552136 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:15.039754 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:15.039852 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:15.051397 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:15.539956 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:15.540068 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:15.551741 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:16.039191 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:16.039261 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:16.050954 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:16.539468 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:16.539533 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:16.550947 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:17.039455 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:17.039523 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:17.050527 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:17.539123 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:17.539207 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:17.551333 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:18.039916 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:18.039999 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:18.051774 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:18.539677 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:18.539783 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:18.551481 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:19.039543 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:19.039622 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:19.051157 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:19.539906 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:19.539971 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:19.551522 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:19.551546 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:19.551615 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:19.562103 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:19.562127 26384 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
I0307 18:48:19.562135 26384 kubeadm.go:1120] stopping kube-system containers ...
I0307 18:48:19.562145 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0307 18:48:19.562200 26384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0307 18:48:19.596473 26384 cri.go:87] found id: ""
I0307 18:48:19.596545 26384 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0307 18:48:19.611484 26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0307 18:48:19.620277 26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0307 18:48:19.620347 26384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0307 18:48:19.629402 26384 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0307 18:48:19.629420 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:48:19.729048 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:48:20.693486 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:48:21.045927 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:48:21.125427 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
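On restart, minikube replays individual `kubeadm init phase` subcommands rather than a full `kubeadm init`, in the order shown above: certs, kubeconfigs, kubelet start, control-plane static pods, then local etcd. A sketch of that sequence as a loop — `run_phase` only echoes here, since `kubeadm` itself isn't available outside the VM:

```shell
# Echo the restart phase sequence instead of executing kubeadm for real.
run_phase() { echo "kubeadm init phase $1 --config /var/tmp/minikube/kubeadm.yaml"; }
out=$(
  for p in "certs all" "kubeconfig all" "kubelet-start" \
           "control-plane all" "etcd local"; do
    run_phase "$p"
  done
)
echo "$out"
```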
I0307 18:48:21.208989 26384 api_server.go:51] waiting for apiserver process to appear ...
I0307 18:48:21.209053 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:21.727096 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:22.226678 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:22.726635 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:23.227460 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:23.726652 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:24.226895 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:24.727601 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:25.227632 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:25.727342 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:26.226885 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:26.727250 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:27.226755 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:27.727168 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:28.227623 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:28.726792 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:29.227535 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:29.727199 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:30.227533 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:30.726863 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:31.226913 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:31.726742 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:32.226629 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:32.726562 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:33.227256 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:33.727095 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:34.227636 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:34.727529 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:35.226672 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:35.239643 26384 api_server.go:71] duration metric: took 14.030659958s to wait for apiserver process to appear ...
I0307 18:48:35.239673 26384 api_server.go:87] waiting for apiserver healthz status ...
I0307 18:48:35.239689 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:40.240554 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:48:40.741289 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:45.742137 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:48:46.240766 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:51.241530 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:48:51.740794 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:55.622725 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": read tcp 192.168.39.1:40614->192.168.39.212:8443: read: connection reset by peer
I0307 18:48:55.741069 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:55.741730 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:56.241350 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:56.241974 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:56.741625 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:56.742311 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:57.240872 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:57.241486 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:57.741098 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:57.741815 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:58.240688 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:58.241449 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:58.740916 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:58.741450 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:59.241002 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:59.241562 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:59.741376 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:59.741967 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:00.241554 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:00.242185 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:00.740765 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:00.741366 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:01.240922 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:01.241524 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:01.741093 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:01.741672 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:02.241289 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:02.241821 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:02.741466 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:02.742055 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:03.240707 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:03.241321 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:03.741112 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:03.741706 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:04.241289 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:04.241805 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:04.741475 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:04.742120 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:05.240659 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:05.241205 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:05.740827 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:05.741407 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:06.240957 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:06.241520 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:06.741097 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:06.741687 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:07.241323 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:07.241898 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:07.741557 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:07.742492 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:08.241389 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:08.242007 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:08.741481 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:08.742046 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:09.240755 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:09.241344 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:09.741175 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:09.741776 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:10.241384 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:10.242065 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:10.741689 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:10.742367 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:11.240908 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:11.241508 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:11.741066 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:11.741702 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:12.241340 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:12.241992 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:12.741591 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:12.742200 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:13.240991 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:13.241618 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:13.741474 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:13.742095 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:14.240668 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:14.241302 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:14.740851 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:14.741426 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:15.240983 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:15.241592 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:15.741169 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:15.741706 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:16.241315 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:16.241927 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:16.741520 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:16.742200 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:17.240744 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:17.241351 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:17.740916 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:22.742180 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:49:23.240982 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:28.241459 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:49:28.740696 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:33.740940 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:49:34.241557 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:37.998029 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": read tcp 192.168.39.1:36774->192.168.39.212:8443: read: connection reset by peer
I0307 18:49:38.240706 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:38.240797 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:38.274793 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:38.274811 26384 cri.go:87] found id: "5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
I0307 18:49:38.274816 26384 cri.go:87] found id: ""
I0307 18:49:38.274822 26384 logs.go:277] 2 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a]
I0307 18:49:38.274884 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.279183 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.283139 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:38.283194 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:38.310826 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:38.310844 26384 cri.go:87] found id: ""
I0307 18:49:38.310850 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:38.310891 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.314471 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:38.314538 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:38.344851 26384 cri.go:87] found id: ""
I0307 18:49:38.344881 26384 logs.go:277] 0 containers: []
W0307 18:49:38.344889 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:38.344894 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:38.344965 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:38.377525 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:38.377548 26384 cri.go:87] found id: ""
I0307 18:49:38.377555 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:38.377609 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.381815 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:38.381869 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:38.417825 26384 cri.go:87] found id: ""
I0307 18:49:38.417845 26384 logs.go:277] 0 containers: []
W0307 18:49:38.417851 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:38.417855 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:38.417925 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:38.454042 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:38.454062 26384 cri.go:87] found id: "a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
I0307 18:49:38.454066 26384 cri.go:87] found id: ""
I0307 18:49:38.454073 26384 logs.go:277] 2 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4]
I0307 18:49:38.454130 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.458203 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.461976 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:38.462036 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:38.498530 26384 cri.go:87] found id: ""
I0307 18:49:38.498555 26384 logs.go:277] 0 containers: []
W0307 18:49:38.498566 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:38.498573 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:38.498623 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:38.545888 26384 cri.go:87] found id: ""
I0307 18:49:38.545918 26384 logs.go:277] 0 containers: []
W0307 18:49:38.545926 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:38.545936 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:38.545952 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:38.596180 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:38.596211 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:38.657673 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:38.657718 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:38.670963 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:38.670998 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:38.710963 26384 logs.go:123] Gathering logs for kube-apiserver [5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a] ...
I0307 18:49:38.710992 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
W0307 18:49:38.740233 26384 logs.go:130] failed kube-apiserver [5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a": Process exited with status 1
stdout:
stderr:
E0307 18:49:38.717772 1569 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found" containerID="5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found"
output:
** stderr **
E0307 18:49:38.717772 1569 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found" containerID="5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found"
** /stderr **
I0307 18:49:38.740259 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:38.740272 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:38.769176 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:38.769208 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:38.816001 26384 logs.go:123] Gathering logs for kube-controller-manager [a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4] ...
I0307 18:49:38.816029 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
W0307 18:49:38.847807 26384 logs.go:130] failed kube-controller-manager [a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4": Process exited with status 1
stdout:
stderr:
E0307 18:49:38.825690 1584 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found" containerID="a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found"
output:
** stderr **
E0307 18:49:38.825690 1584 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found" containerID="a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found"
** /stderr **
I0307 18:49:38.847829 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:38.847839 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:38.960358 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:38.960378 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:38.960391 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:39.024178 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:39.024209 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:41.561116 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:41.561705 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:41.741078 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:41.741163 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:41.770944 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:41.770967 26384 cri.go:87] found id: ""
I0307 18:49:41.770975 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:41.771032 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:41.774913 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:41.774977 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:41.802816 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:41.802838 26384 cri.go:87] found id: ""
I0307 18:49:41.802847 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:41.802892 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:41.806570 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:41.806610 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:41.835237 26384 cri.go:87] found id: ""
I0307 18:49:41.835270 26384 logs.go:277] 0 containers: []
W0307 18:49:41.835276 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:41.835281 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:41.835337 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:41.870305 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:41.870323 26384 cri.go:87] found id: ""
I0307 18:49:41.870329 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:41.870376 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:41.874332 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:41.874383 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:41.901971 26384 cri.go:87] found id: ""
I0307 18:49:41.901993 26384 logs.go:277] 0 containers: []
W0307 18:49:41.901999 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:41.902005 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:41.902057 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:41.929792 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:41.929823 26384 cri.go:87] found id: ""
I0307 18:49:41.929834 26384 logs.go:277] 1 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:41.929885 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:41.933861 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:41.933945 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:41.962195 26384 cri.go:87] found id: ""
I0307 18:49:41.962222 26384 logs.go:277] 0 containers: []
W0307 18:49:41.962230 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:41.962237 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:41.962290 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:41.990939 26384 cri.go:87] found id: ""
I0307 18:49:41.990965 26384 logs.go:277] 0 containers: []
W0307 18:49:41.990972 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:41.990984 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:41.990994 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:42.052031 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:42.052054 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:42.052069 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:42.081594 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:42.081622 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:42.109456 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:42.109493 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:42.177139 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:42.177180 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:42.226652 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:42.226679 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:42.287629 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:42.287659 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:42.299095 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:42.299115 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:42.340655 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:42.340684 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:44.881007 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:44.881568 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:45.241058 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:45.241130 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:45.268565 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:45.268588 26384 cri.go:87] found id: ""
I0307 18:49:45.268596 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:45.268650 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:45.272618 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:45.272685 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:45.299447 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:45.299471 26384 cri.go:87] found id: ""
I0307 18:49:45.299479 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:45.299528 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:45.303332 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:45.303397 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:45.332836 26384 cri.go:87] found id: ""
I0307 18:49:45.332863 26384 logs.go:277] 0 containers: []
W0307 18:49:45.332873 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:45.332881 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:45.332989 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:45.359776 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:45.359795 26384 cri.go:87] found id: ""
I0307 18:49:45.359805 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:45.359864 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:45.363663 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:45.363725 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:45.389419 26384 cri.go:87] found id: ""
I0307 18:49:45.389448 26384 logs.go:277] 0 containers: []
W0307 18:49:45.389459 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:45.389465 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:45.389523 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:45.415773 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:45.415796 26384 cri.go:87] found id: ""
I0307 18:49:45.415804 26384 logs.go:277] 1 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:45.415860 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:45.419687 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:45.419754 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:45.448748 26384 cri.go:87] found id: ""
I0307 18:49:45.448777 26384 logs.go:277] 0 containers: []
W0307 18:49:45.448786 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:45.448791 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:45.448854 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:45.474641 26384 cri.go:87] found id: ""
I0307 18:49:45.474669 26384 logs.go:277] 0 containers: []
W0307 18:49:45.474679 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:45.474696 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:45.474711 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:45.486226 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:45.486249 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:45.545694 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:45.545714 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:45.545726 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:45.591466 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:45.591493 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:45.623810 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:45.623841 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:45.686240 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:45.686268 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:45.720278 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:45.720302 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:45.745876 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:45.745913 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:45.809485 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:45.809518 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:48.348770 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:48.349502 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:48.741584 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:48.741651 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:48.777550 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:48.777572 26384 cri.go:87] found id: ""
I0307 18:49:48.777578 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:48.777636 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:48.782172 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:48.782233 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:48.818792 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:48.818817 26384 cri.go:87] found id: ""
I0307 18:49:48.818824 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:48.818869 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:48.823044 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:48.823106 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:48.857459 26384 cri.go:87] found id: ""
I0307 18:49:48.857484 26384 logs.go:277] 0 containers: []
W0307 18:49:48.857491 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:48.857498 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:48.857556 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:48.889707 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:48.889728 26384 cri.go:87] found id: ""
I0307 18:49:48.889735 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:48.889778 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:48.894345 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:48.894420 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:48.933590 26384 cri.go:87] found id: ""
I0307 18:49:48.933610 26384 logs.go:277] 0 containers: []
W0307 18:49:48.933617 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:48.933622 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:48.933667 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:48.967476 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:48.967495 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:48.967499 26384 cri.go:87] found id: ""
I0307 18:49:48.967506 26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:48.967549 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:48.971759 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:48.975656 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:48.975714 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:49.026784 26384 cri.go:87] found id: ""
I0307 18:49:49.026821 26384 logs.go:277] 0 containers: []
W0307 18:49:49.026831 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:49.026839 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:49.026900 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:49.055435 26384 cri.go:87] found id: ""
I0307 18:49:49.055458 26384 logs.go:277] 0 containers: []
W0307 18:49:49.055465 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:49.055476 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:49.055490 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:49.089020 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:49.089048 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:49.138877 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:49.138913 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:49.153088 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:49.153113 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:49.220054 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:49.220079 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:49.220098 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:49.260102 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:49.260132 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:49.288829 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:49.288855 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:49.360373 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:49:49.360411 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:49.390432 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:49.390471 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:49.438326 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:49.438360 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:51.999825 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:52.000476 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:52.240790 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:52.240869 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:52.268760 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:52.268782 26384 cri.go:87] found id: ""
I0307 18:49:52.268790 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:52.268860 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:52.273290 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:52.273355 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:52.303004 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:52.303024 26384 cri.go:87] found id: ""
I0307 18:49:52.303031 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:52.303070 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:52.307394 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:52.307454 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:52.334227 26384 cri.go:87] found id: ""
I0307 18:49:52.334252 26384 logs.go:277] 0 containers: []
W0307 18:49:52.334259 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:52.334263 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:52.334308 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:52.365944 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:52.365964 26384 cri.go:87] found id: ""
I0307 18:49:52.365971 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:52.366014 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:52.369575 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:52.369631 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:52.399970 26384 cri.go:87] found id: ""
I0307 18:49:52.399998 26384 logs.go:277] 0 containers: []
W0307 18:49:52.400008 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:52.400015 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:52.400080 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:52.428372 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:52.428394 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:52.428399 26384 cri.go:87] found id: ""
I0307 18:49:52.428404 26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:52.428452 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:52.432426 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:52.436419 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:52.436468 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:52.465745 26384 cri.go:87] found id: ""
I0307 18:49:52.465777 26384 logs.go:277] 0 containers: []
W0307 18:49:52.465786 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:52.465794 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:52.465851 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:52.493993 26384 cri.go:87] found id: ""
I0307 18:49:52.494022 26384 logs.go:277] 0 containers: []
W0307 18:49:52.494032 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:52.494048 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:52.494063 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:52.562310 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:52.562349 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:52.601842 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:52.601867 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:52.663702 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:52.663735 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:52.676175 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:52.676205 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:52.725457 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:52.725478 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:52.725491 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:52.773421 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:52.773446 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:52.820180 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:52.820212 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:52.854035 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:52.854060 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:52.882963 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:49:52.882993 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:55.412727 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:55.413292 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:55.740694 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:55.740782 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:55.769593 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:55.769617 26384 cri.go:87] found id: ""
I0307 18:49:55.769624 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:55.769675 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:55.773846 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:55.773918 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:55.799820 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:55.799844 26384 cri.go:87] found id: ""
I0307 18:49:55.799852 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:55.799904 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:55.803655 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:55.803714 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:55.830795 26384 cri.go:87] found id: ""
I0307 18:49:55.830820 26384 logs.go:277] 0 containers: []
W0307 18:49:55.830829 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:55.830840 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:55.830892 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:55.861486 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:55.861511 26384 cri.go:87] found id: ""
I0307 18:49:55.861519 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:55.861571 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:55.865664 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:55.865712 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:55.892035 26384 cri.go:87] found id: ""
I0307 18:49:55.892057 26384 logs.go:277] 0 containers: []
W0307 18:49:55.892067 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:55.892074 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:55.892122 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:55.921473 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:55.921491 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:55.921503 26384 cri.go:87] found id: ""
I0307 18:49:55.921511 26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:55.921560 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:55.925654 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:55.929475 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:55.929539 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:55.956526 26384 cri.go:87] found id: ""
I0307 18:49:55.956559 26384 logs.go:277] 0 containers: []
W0307 18:49:55.956566 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:55.956571 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:55.956614 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:55.983852 26384 cri.go:87] found id: ""
I0307 18:49:55.983873 26384 logs.go:277] 0 containers: []
W0307 18:49:55.983879 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:55.983891 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:49:55.983905 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:56.013373 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:56.013404 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:56.075477 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:56.075514 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:56.134932 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:56.134953 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:56.134963 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:56.162676 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:56.162702 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:56.205835 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:56.205864 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:56.254193 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:56.254226 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:56.291170 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:56.291199 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:56.303219 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:56.303244 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:56.338501 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:56.338530 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:58.906800 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:58.907377 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:59.240745 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:59.240816 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:59.270117 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:59.270138 26384 cri.go:87] found id: ""
I0307 18:49:59.270148 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:59.270194 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:59.277486 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:59.277555 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:59.319990 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:59.320008 26384 cri.go:87] found id: ""
I0307 18:49:59.320015 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:59.320056 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:59.324577 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:59.324620 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:59.355279 26384 cri.go:87] found id: ""
I0307 18:49:59.355308 26384 logs.go:277] 0 containers: []
W0307 18:49:59.355318 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:59.355325 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:59.355383 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:59.385970 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:59.386019 26384 cri.go:87] found id: ""
I0307 18:49:59.386029 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:59.386084 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:59.389898 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:59.389957 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:59.418100 26384 cri.go:87] found id: ""
I0307 18:49:59.418123 26384 logs.go:277] 0 containers: []
W0307 18:49:59.418132 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:59.418141 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:59.418199 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:59.448963 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:59.448984 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:59.448990 26384 cri.go:87] found id: ""
I0307 18:49:59.448998 26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:59.449053 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:59.452973 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:59.456699 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:59.456745 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:59.487041 26384 cri.go:87] found id: ""
I0307 18:49:59.487066 26384 logs.go:277] 0 containers: []
W0307 18:49:59.487075 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:59.487081 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:59.487141 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:59.520702 26384 cri.go:87] found id: ""
I0307 18:49:59.520733 26384 logs.go:277] 0 containers: []
W0307 18:49:59.520744 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:59.520756 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:59.520770 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:59.534981 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:59.535020 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:59.571150 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:59.571176 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:59.608785 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:59.608815 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
W0307 18:49:59.635030 26384 logs.go:130] failed kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6": Process exited with status 1
stdout:
stderr:
E0307 18:49:59.613980 2152 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found" containerID="476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
time="2023-03-07T18:49:59Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found"
output:
** stderr **
E0307 18:49:59.613980 2152 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found" containerID="476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
time="2023-03-07T18:49:59Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found"
** /stderr **
I0307 18:49:59.635047 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:59.635057 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:59.681919 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:59.681947 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:59.738173 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:59.738205 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:59.789970 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:59.789991 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:59.790005 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:59.859269 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:49:59.859302 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:59.901677 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:59.901708 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:02.439332 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:07.439703 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:50:07.741227 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:07.741304 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:07.771935 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:07.771958 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:50:07.771964 26384 cri.go:87] found id: ""
I0307 18:50:07.771972 26384 logs.go:277] 2 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:50:07.772033 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.775931 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.779533 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:07.779583 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:07.807355 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:07.807372 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:50:07.807376 26384 cri.go:87] found id: ""
I0307 18:50:07.807382 26384 logs.go:277] 2 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:50:07.807423 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.810941 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.814428 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:07.814480 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:07.840502 26384 cri.go:87] found id: ""
I0307 18:50:07.840530 26384 logs.go:277] 0 containers: []
W0307 18:50:07.840537 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:07.840543 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:07.840590 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:07.872460 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:07.872482 26384 cri.go:87] found id: ""
I0307 18:50:07.872490 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:07.872532 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.876167 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:07.876234 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:07.902163 26384 cri.go:87] found id: ""
I0307 18:50:07.902185 26384 logs.go:277] 0 containers: []
W0307 18:50:07.902194 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:07.902203 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:07.902264 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:07.934206 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:07.934234 26384 cri.go:87] found id: ""
I0307 18:50:07.934244 26384 logs.go:277] 1 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
I0307 18:50:07.934302 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.937973 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:07.938062 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:07.969362 26384 cri.go:87] found id: ""
I0307 18:50:07.969395 26384 logs.go:277] 0 containers: []
W0307 18:50:07.969406 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:07.969413 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:07.969476 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:07.996288 26384 cri.go:87] found id: ""
I0307 18:50:07.996313 26384 logs.go:277] 0 containers: []
W0307 18:50:07.996322 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:07.996332 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:07.996346 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:08.022863 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:08.022893 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:08.072434 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:08.072467 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:08.110215 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:08.110244 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:08.139123 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:50:08.139152 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:50:08.172722 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:08.172748 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0307 18:50:22.210905 26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.038132901s)
W0307 18:50:22.210954 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:22.210963 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:50:22.210973 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
W0307 18:50:22.243161 26384 logs.go:130] failed etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7": Process exited with status 1
stdout:
stderr:
E0307 18:50:22.230070 2359 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found" containerID="33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
time="2023-03-07T18:50:22Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found"
output:
** stderr **
E0307 18:50:22.230070 2359 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found" containerID="33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
time="2023-03-07T18:50:22Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found"
** /stderr **
I0307 18:50:22.243182 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:22.243194 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:22.312610 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:50:22.312647 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:22.376483 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:22.376512 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:22.441347 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:22.441379 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:24.956249 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:24.956843 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:25.241295 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:25.241366 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:25.271038 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:25.271057 26384 cri.go:87] found id: ""
I0307 18:50:25.271063 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:25.271112 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:25.275131 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:25.275189 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:25.304102 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:25.304122 26384 cri.go:87] found id: ""
I0307 18:50:25.304131 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:25.304176 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:25.308112 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:25.308165 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:25.335593 26384 cri.go:87] found id: ""
I0307 18:50:25.335621 26384 logs.go:277] 0 containers: []
W0307 18:50:25.335631 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:25.335639 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:25.335696 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:25.366744 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:25.366765 26384 cri.go:87] found id: ""
I0307 18:50:25.366773 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:25.366814 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:25.370479 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:25.370523 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:25.397628 26384 cri.go:87] found id: ""
I0307 18:50:25.397651 26384 logs.go:277] 0 containers: []
W0307 18:50:25.397657 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:25.397662 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:25.397703 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:25.424370 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:25.424388 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:25.424392 26384 cri.go:87] found id: ""
I0307 18:50:25.424399 26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
I0307 18:50:25.424438 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:25.428375 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:25.432135 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:25.432197 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:25.464666 26384 cri.go:87] found id: ""
I0307 18:50:25.464686 26384 logs.go:277] 0 containers: []
W0307 18:50:25.464693 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:25.464698 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:25.464754 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:25.495748 26384 cri.go:87] found id: ""
I0307 18:50:25.495771 26384 logs.go:277] 0 containers: []
W0307 18:50:25.495778 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:25.495798 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:25.495816 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:25.552387 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:25.552409 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:25.552419 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:25.585072 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:25.585100 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:25.612624 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:25.612652 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:25.642351 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:25.642375 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:25.696054 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:25.696080 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:25.759230 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:25.759261 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:25.771377 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:25.771400 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:25.814932 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:25.814958 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:25.880431 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:50:25.880462 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:28.429316 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:28.430023 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:28.740900 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:28.740981 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:28.771490 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:28.771510 26384 cri.go:87] found id: ""
I0307 18:50:28.771517 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:28.771573 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:28.775481 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:28.775544 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:28.803618 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:28.803637 26384 cri.go:87] found id: ""
I0307 18:50:28.803644 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:28.803682 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:28.807610 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:28.807656 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:28.837030 26384 cri.go:87] found id: ""
I0307 18:50:28.837048 26384 logs.go:277] 0 containers: []
W0307 18:50:28.837053 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:28.837058 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:28.837105 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:28.868318 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:28.868344 26384 cri.go:87] found id: ""
I0307 18:50:28.868353 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:28.868412 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:28.872041 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:28.872096 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:28.900155 26384 cri.go:87] found id: ""
I0307 18:50:28.900186 26384 logs.go:277] 0 containers: []
W0307 18:50:28.900195 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:28.900206 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:28.900266 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:28.928973 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:28.929007 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:28.929014 26384 cri.go:87] found id: ""
I0307 18:50:28.929022 26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
I0307 18:50:28.929080 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:28.932963 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:28.936674 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:28.936728 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:28.965932 26384 cri.go:87] found id: ""
I0307 18:50:28.965955 26384 logs.go:277] 0 containers: []
W0307 18:50:28.965965 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:28.965972 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:28.966027 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:28.996172 26384 cri.go:87] found id: ""
I0307 18:50:28.996202 26384 logs.go:277] 0 containers: []
W0307 18:50:28.996213 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:28.996230 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:28.996252 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:29.027476 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:50:29.027505 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:29.068982 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:29.069007 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:29.123121 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:29.123155 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:29.154965 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:29.154990 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:29.221021 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:29.221051 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:29.275777 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:29.275800 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:29.275817 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:29.305802 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:29.305836 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:29.374935 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:29.374971 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:29.404375 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:29.404401 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:31.916470 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:31.917095 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:32.241577 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:32.241647 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:32.273069 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:32.273102 26384 cri.go:87] found id: ""
I0307 18:50:32.273108 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:32.273164 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:32.277800 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:32.277842 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:32.312694 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:32.312722 26384 cri.go:87] found id: ""
I0307 18:50:32.312732 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:32.312778 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:32.316764 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:32.316809 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:32.348032 26384 cri.go:87] found id: ""
I0307 18:50:32.348049 26384 logs.go:277] 0 containers: []
W0307 18:50:32.348054 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:32.348059 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:32.348116 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:32.382261 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:32.382286 26384 cri.go:87] found id: ""
I0307 18:50:32.382297 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:32.382355 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:32.386519 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:32.386583 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:32.423869 26384 cri.go:87] found id: ""
I0307 18:50:32.423890 26384 logs.go:277] 0 containers: []
W0307 18:50:32.423897 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:32.423902 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:32.423964 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:32.461514 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:32.461538 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:32.461545 26384 cri.go:87] found id: ""
I0307 18:50:32.461553 26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
I0307 18:50:32.461606 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:32.465604 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:32.469437 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:32.469474 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:32.507355 26384 cri.go:87] found id: ""
I0307 18:50:32.507376 26384 logs.go:277] 0 containers: []
W0307 18:50:32.507388 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:32.507395 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:32.507451 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:32.545202 26384 cri.go:87] found id: ""
I0307 18:50:32.545230 26384 logs.go:277] 0 containers: []
W0307 18:50:32.545240 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:32.545257 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:50:32.545270 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:32.598969 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:32.598996 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:32.666940 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:32.666972 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:32.724486 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:32.724506 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:32.724516 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:32.758363 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:32.758389 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:32.838189 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:32.838228 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:32.891708 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:32.891740 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:32.903720 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:32.903746 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:32.936722 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:32.936745 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:32.969027 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:32.969055 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:35.524418 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:35.525031 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:35.741445 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:35.741534 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:35.771644 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:35.771665 26384 cri.go:87] found id: ""
I0307 18:50:35.771673 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:35.771733 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:35.775944 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:35.776002 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:35.807438 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:35.807455 26384 cri.go:87] found id: ""
I0307 18:50:35.807464 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:35.807512 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:35.811521 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:35.811577 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:35.839719 26384 cri.go:87] found id: ""
I0307 18:50:35.839739 26384 logs.go:277] 0 containers: []
W0307 18:50:35.839746 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:35.839751 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:35.839801 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:35.870068 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:35.870089 26384 cri.go:87] found id: ""
I0307 18:50:35.870096 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:35.870139 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:35.873953 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:35.874009 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:35.907548 26384 cri.go:87] found id: ""
I0307 18:50:35.907576 26384 logs.go:277] 0 containers: []
W0307 18:50:35.907584 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:35.907589 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:35.907648 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:35.938809 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:35.938828 26384 cri.go:87] found id: ""
I0307 18:50:35.938834 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:35.938888 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:35.943995 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:35.944045 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:35.971387 26384 cri.go:87] found id: ""
I0307 18:50:35.971406 26384 logs.go:277] 0 containers: []
W0307 18:50:35.971413 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:35.971420 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:35.971470 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:35.998911 26384 cri.go:87] found id: ""
I0307 18:50:35.998938 26384 logs.go:277] 0 containers: []
W0307 18:50:35.998965 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:35.998982 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:35.999012 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:36.038815 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:36.038848 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:36.077044 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:36.077071 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:36.129558 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:36.129591 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:36.129604 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:36.166935 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:36.166960 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:36.195852 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:36.195882 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:36.271088 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:36.271123 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:36.326628 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:36.326662 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:36.389379 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:36.389411 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:38.901954 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:38.902491 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:39.240923 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:39.241009 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:39.271083 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:39.271107 26384 cri.go:87] found id: ""
I0307 18:50:39.271116 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:39.271171 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:39.275511 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:39.275567 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:39.306601 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:39.306618 26384 cri.go:87] found id: ""
I0307 18:50:39.306625 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:39.306672 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:39.311169 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:39.311223 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:39.341921 26384 cri.go:87] found id: ""
I0307 18:50:39.341940 26384 logs.go:277] 0 containers: []
W0307 18:50:39.341945 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:39.341951 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:39.342005 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:39.370475 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:39.370499 26384 cri.go:87] found id: ""
I0307 18:50:39.370509 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:39.370560 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:39.374423 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:39.374480 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:39.404780 26384 cri.go:87] found id: ""
I0307 18:50:39.404801 26384 logs.go:277] 0 containers: []
W0307 18:50:39.404809 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:39.404819 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:39.404877 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:39.435660 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:39.435684 26384 cri.go:87] found id: ""
I0307 18:50:39.435692 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:39.435746 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:39.439799 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:39.439857 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:39.468225 26384 cri.go:87] found id: ""
I0307 18:50:39.468250 26384 logs.go:277] 0 containers: []
W0307 18:50:39.468259 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:39.468267 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:39.468325 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:39.500922 26384 cri.go:87] found id: ""
I0307 18:50:39.500949 26384 logs.go:277] 0 containers: []
W0307 18:50:39.500958 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:39.500982 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:39.500995 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:39.530882 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:39.530921 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:39.600657 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:39.600685 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:39.649285 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:39.649317 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:39.697957 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:39.697989 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:39.759513 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:39.759544 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:39.772345 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:39.772373 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:39.831389 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:39.831411 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:39.831421 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:39.864274 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:39.864314 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:42.400891 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:42.401466 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:42.740872 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:42.740939 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:42.768431 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:42.768453 26384 cri.go:87] found id: ""
I0307 18:50:42.768460 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:42.768513 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:42.772288 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:42.772331 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:42.798526 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:42.798553 26384 cri.go:87] found id: ""
I0307 18:50:42.798562 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:42.798603 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:42.802234 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:42.802282 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:42.828743 26384 cri.go:87] found id: ""
I0307 18:50:42.828762 26384 logs.go:277] 0 containers: []
W0307 18:50:42.828769 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:42.828774 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:42.828825 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:42.856471 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:42.856494 26384 cri.go:87] found id: ""
I0307 18:50:42.856501 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:42.856546 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:42.860506 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:42.860571 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:42.886392 26384 cri.go:87] found id: ""
I0307 18:50:42.886416 26384 logs.go:277] 0 containers: []
W0307 18:50:42.886423 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:42.886428 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:42.886474 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:42.913452 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:42.913478 26384 cri.go:87] found id: ""
I0307 18:50:42.913487 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:42.913532 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:42.917323 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:42.917383 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:42.943946 26384 cri.go:87] found id: ""
I0307 18:50:42.943964 26384 logs.go:277] 0 containers: []
W0307 18:50:42.943970 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:42.943975 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:42.944025 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:42.969863 26384 cri.go:87] found id: ""
I0307 18:50:42.969888 26384 logs.go:277] 0 containers: []
W0307 18:50:42.969896 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:42.969927 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:42.969944 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:43.027701 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:43.027737 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:43.041018 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:43.041051 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:43.090630 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:43.090658 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:43.090670 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:43.162692 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:43.162728 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:43.208000 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:43.208025 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:43.241826 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:43.241853 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:43.272472 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:43.272497 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:43.323281 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:43.323311 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:45.854952 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:45.855553 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:46.241035 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:46.241121 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:46.274554 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:46.274576 26384 cri.go:87] found id: ""
I0307 18:50:46.274583 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:46.274637 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:46.278942 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:46.278994 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:46.307295 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:46.307313 26384 cri.go:87] found id: ""
I0307 18:50:46.307320 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:46.307363 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:46.311114 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:46.311163 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:46.341762 26384 cri.go:87] found id: ""
I0307 18:50:46.341780 26384 logs.go:277] 0 containers: []
W0307 18:50:46.341787 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:46.341792 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:46.341852 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:46.374164 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:46.374187 26384 cri.go:87] found id: ""
I0307 18:50:46.374196 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:46.374252 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:46.378131 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:46.378201 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:46.406158 26384 cri.go:87] found id: ""
I0307 18:50:46.406176 26384 logs.go:277] 0 containers: []
W0307 18:50:46.406182 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:46.406188 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:46.406230 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:46.434896 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:46.434922 26384 cri.go:87] found id: ""
I0307 18:50:46.434931 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:46.434985 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:46.438785 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:46.438842 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:46.469078 26384 cri.go:87] found id: ""
I0307 18:50:46.469100 26384 logs.go:277] 0 containers: []
W0307 18:50:46.469107 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:46.469113 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:46.469178 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:46.500068 26384 cri.go:87] found id: ""
I0307 18:50:46.500096 26384 logs.go:277] 0 containers: []
W0307 18:50:46.500105 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:46.500117 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:46.500128 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:46.537674 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:46.537702 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:46.599647 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:46.599677 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:46.611626 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:46.611656 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:46.664489 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:46.664513 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:46.664526 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:46.698473 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:46.698501 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:46.730118 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:46.730147 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:46.777380 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:46.777407 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:46.827387 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:46.827416 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:49.400363 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:49.400915 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:49.741647 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:49.741733 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:49.774027 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:49.774056 26384 cri.go:87] found id: ""
I0307 18:50:49.774065 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:49.774123 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:49.778228 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:49.778286 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:49.807806 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:49.807832 26384 cri.go:87] found id: ""
I0307 18:50:49.807841 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:49.807884 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:49.811537 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:49.811584 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:49.839443 26384 cri.go:87] found id: ""
I0307 18:50:49.839468 26384 logs.go:277] 0 containers: []
W0307 18:50:49.839477 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:49.839485 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:49.839543 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:49.868206 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:49.868225 26384 cri.go:87] found id: ""
I0307 18:50:49.868232 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:49.868273 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:49.871988 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:49.872029 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:49.903763 26384 cri.go:87] found id: ""
I0307 18:50:49.903790 26384 logs.go:277] 0 containers: []
W0307 18:50:49.903802 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:49.903809 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:49.903869 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:49.931386 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:49.931408 26384 cri.go:87] found id: ""
I0307 18:50:49.931417 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:49.931470 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:49.935416 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:49.935472 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:49.964413 26384 cri.go:87] found id: ""
I0307 18:50:49.964442 26384 logs.go:277] 0 containers: []
W0307 18:50:49.964451 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:49.964457 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:49.964519 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:49.995371 26384 cri.go:87] found id: ""
I0307 18:50:49.995400 26384 logs.go:277] 0 containers: []
W0307 18:50:49.995410 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:49.995428 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:49.995443 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:50.027383 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:50.027415 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:50.102948 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:50.102987 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:50.153563 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:50.153595 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:50.187209 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:50.187240 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:50.252908 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:50.252940 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:50.265236 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:50.265260 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:50.319484 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:50.319506 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:50.319518 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:50.349093 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:50.349119 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:52.888932 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:52.889665 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:53.241383 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:53.241454 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:53.270824 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:53.270844 26384 cri.go:87] found id: ""
I0307 18:50:53.270851 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:53.270903 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:53.274602 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:53.274642 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:53.307455 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:53.307483 26384 cri.go:87] found id: ""
I0307 18:50:53.307492 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:53.307545 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:53.311591 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:53.311651 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:53.339718 26384 cri.go:87] found id: ""
I0307 18:50:53.339742 26384 logs.go:277] 0 containers: []
W0307 18:50:53.339751 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:53.339758 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:53.339811 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:53.369697 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:53.369729 26384 cri.go:87] found id: ""
I0307 18:50:53.369739 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:53.369781 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:53.373719 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:53.373782 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:53.401736 26384 cri.go:87] found id: ""
I0307 18:50:53.401754 26384 logs.go:277] 0 containers: []
W0307 18:50:53.401760 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:53.401764 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:53.401823 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:53.432212 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:53.432236 26384 cri.go:87] found id: ""
I0307 18:50:53.432244 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:53.432301 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:53.436390 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:53.436449 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:53.465471 26384 cri.go:87] found id: ""
I0307 18:50:53.465500 26384 logs.go:277] 0 containers: []
W0307 18:50:53.465518 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:53.465525 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:53.465583 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:53.493404 26384 cri.go:87] found id: ""
I0307 18:50:53.493431 26384 logs.go:277] 0 containers: []
W0307 18:50:53.493440 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:53.493455 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:53.493468 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:53.556791 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:53.556823 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:53.568973 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:53.568992 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:53.621325 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:53.621345 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:53.621356 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:53.662717 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:53.662744 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:53.693831 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:53.693855 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:53.731078 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:53.731104 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:53.759392 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:53.759416 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:53.827438 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:53.827472 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:56.380799 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:56.381488 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:56.740948 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:56.741023 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:56.777942 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:56.777966 26384 cri.go:87] found id: ""
I0307 18:50:56.777977 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:56.778023 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:56.782180 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:56.782230 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:56.810835 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:56.810861 26384 cri.go:87] found id: ""
I0307 18:50:56.810870 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:56.810916 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:56.814853 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:56.814919 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:56.842426 26384 cri.go:87] found id: ""
I0307 18:50:56.842451 26384 logs.go:277] 0 containers: []
W0307 18:50:56.842459 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:56.842465 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:56.842517 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:56.877177 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:56.877204 26384 cri.go:87] found id: ""
I0307 18:50:56.877212 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:56.877269 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:56.881405 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:56.881477 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:56.913559 26384 cri.go:87] found id: ""
I0307 18:50:56.913584 26384 logs.go:277] 0 containers: []
W0307 18:50:56.913594 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:56.913602 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:56.913659 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:56.941955 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:56.941979 26384 cri.go:87] found id: ""
I0307 18:50:56.941987 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:56.942045 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:56.946194 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:56.946260 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:56.978326 26384 cri.go:87] found id: ""
I0307 18:50:56.978349 26384 logs.go:277] 0 containers: []
W0307 18:50:56.978355 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:56.978361 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:56.978420 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:57.007950 26384 cri.go:87] found id: ""
I0307 18:50:57.007973 26384 logs.go:277] 0 containers: []
W0307 18:50:57.007979 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:57.007990 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:57.008004 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:57.079815 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:57.079853 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:57.120095 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:57.120125 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:57.180846 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:57.180881 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:57.193148 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:57.193171 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:57.246199 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:57.246224 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:57.246238 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:57.299491 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:57.299528 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:57.335019 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:57.335052 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:57.363632 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:57.363662 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:59.901204 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:59.901827 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:00.241273 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:00.241359 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:00.271191 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:51:00.271210 26384 cri.go:87] found id: ""
I0307 18:51:00.271217 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:51:00.271260 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:00.276060 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:00.276095 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:00.313616 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:00.313635 26384 cri.go:87] found id: ""
I0307 18:51:00.313642 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:51:00.313691 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:00.317695 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:00.317746 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:00.354185 26384 cri.go:87] found id: ""
I0307 18:51:00.354202 26384 logs.go:277] 0 containers: []
W0307 18:51:00.354210 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:00.354217 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:00.354272 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:00.388615 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:00.388637 26384 cri.go:87] found id: ""
I0307 18:51:00.388646 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:00.388708 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:00.392706 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:00.392764 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:00.419909 26384 cri.go:87] found id: ""
I0307 18:51:00.419930 26384 logs.go:277] 0 containers: []
W0307 18:51:00.419937 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:00.419942 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:00.419989 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:00.448896 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:00.448921 26384 cri.go:87] found id: ""
I0307 18:51:00.448929 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:51:00.448982 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:00.452787 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:00.452848 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:00.482963 26384 cri.go:87] found id: ""
I0307 18:51:00.482983 26384 logs.go:277] 0 containers: []
W0307 18:51:00.482989 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:00.482994 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:00.483049 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:00.510864 26384 cri.go:87] found id: ""
I0307 18:51:00.510894 26384 logs.go:277] 0 containers: []
W0307 18:51:00.510905 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:00.510922 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:00.510938 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:00.584622 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:00.584656 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:00.620966 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:00.620997 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:00.633989 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:00.634015 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:00.685115 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:00.685136 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:51:00.685145 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:51:00.722939 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:51:00.722971 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:00.751368 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:00.751399 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:00.814202 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:51:00.814234 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:00.855965 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:00.855990 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:03.406623 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:03.407166 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:03.740702 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:03.740777 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:03.774539 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:03.774560 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:51:03.774567 26384 cri.go:87] found id: ""
I0307 18:51:03.774575 26384 logs.go:277] 2 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:51:03.774639 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:03.778696 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:03.782771 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:03.782817 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:03.818150 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:03.818173 26384 cri.go:87] found id: ""
I0307 18:51:03.818182 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:51:03.818226 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:03.822385 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:03.822442 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:03.855669 26384 cri.go:87] found id: ""
I0307 18:51:03.855697 26384 logs.go:277] 0 containers: []
W0307 18:51:03.855706 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:03.855713 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:03.855765 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:03.888270 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:03.888297 26384 cri.go:87] found id: ""
I0307 18:51:03.888304 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:03.888346 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:03.892269 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:03.892332 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:03.920187 26384 cri.go:87] found id: ""
I0307 18:51:03.920221 26384 logs.go:277] 0 containers: []
W0307 18:51:03.920232 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:03.920239 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:03.920296 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:03.953587 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:03.953613 26384 cri.go:87] found id: ""
I0307 18:51:03.953620 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:51:03.953664 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:03.957799 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:03.957864 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:03.990134 26384 cri.go:87] found id: ""
I0307 18:51:03.990163 26384 logs.go:277] 0 containers: []
W0307 18:51:03.990173 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:03.990180 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:03.990252 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:04.027162 26384 cri.go:87] found id: ""
I0307 18:51:04.027193 26384 logs.go:277] 0 containers: []
W0307 18:51:04.027203 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:04.027222 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:51:04.027242 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:51:04.067517 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:04.067549 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:04.149401 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:51:04.149431 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:04.193745 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:04.193773 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:04.255156 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:04.255194 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:04.273611 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:04.273640 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0307 18:51:25.368122 26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.094454524s)
W0307 18:51:25.368169 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:25.368184 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:25.368198 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:25.400867 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:51:25.400894 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:25.431796 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:25.431828 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:25.487683 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:25.487715 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:28.026074 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:28.026610 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:28.241444 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:28.241526 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:28.274761 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:28.274787 26384 cri.go:87] found id: ""
I0307 18:51:28.274794 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:28.274855 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:28.279831 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:28.279890 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:28.313516 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:28.313534 26384 cri.go:87] found id: ""
I0307 18:51:28.313546 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:51:28.313588 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:28.317666 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:28.317719 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:28.347101 26384 cri.go:87] found id: ""
I0307 18:51:28.347124 26384 logs.go:277] 0 containers: []
W0307 18:51:28.347131 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:28.347136 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:28.347198 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:28.378300 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:28.378320 26384 cri.go:87] found id: ""
I0307 18:51:28.378326 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:28.378377 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:28.382695 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:28.382753 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:28.410959 26384 cri.go:87] found id: ""
I0307 18:51:28.410981 26384 logs.go:277] 0 containers: []
W0307 18:51:28.410988 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:28.410995 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:28.411048 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:28.441806 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:28.441826 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:28.441833 26384 cri.go:87] found id: ""
I0307 18:51:28.441842 26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:51:28.441892 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:28.446211 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:28.450221 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:28.450282 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:28.483257 26384 cri.go:87] found id: ""
I0307 18:51:28.483279 26384 logs.go:277] 0 containers: []
W0307 18:51:28.483286 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:28.483292 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:28.483358 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:28.510972 26384 cri.go:87] found id: ""
I0307 18:51:28.510998 26384 logs.go:277] 0 containers: []
W0307 18:51:28.511008 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:28.511026 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:28.511044 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:28.524745 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:28.524776 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:28.578288 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:28.578311 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:28.578323 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:28.611345 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:28.611382 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:28.683142 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:28.683180 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:28.713237 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:51:28.713266 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:28.751528 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:28.751554 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:28.789824 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:28.789849 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:28.849258 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:51:28.849288 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:28.881741 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:28.881766 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:31.435018 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:31.435708 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:31.741199 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:31.741275 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:31.775567 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:31.775595 26384 cri.go:87] found id: ""
I0307 18:51:31.775603 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:31.775660 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:31.779786 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:31.779843 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:31.811197 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:31.811217 26384 cri.go:87] found id: ""
I0307 18:51:31.811225 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:51:31.811279 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:31.815320 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:31.815380 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:31.844870 26384 cri.go:87] found id: ""
I0307 18:51:31.844898 26384 logs.go:277] 0 containers: []
W0307 18:51:31.844907 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:31.844915 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:31.844992 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:31.872742 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:31.872765 26384 cri.go:87] found id: ""
I0307 18:51:31.872779 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:31.872834 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:31.876867 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:31.876935 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:31.903271 26384 cri.go:87] found id: ""
I0307 18:51:31.903299 26384 logs.go:277] 0 containers: []
W0307 18:51:31.903306 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:31.903311 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:31.903361 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:31.930122 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:31.930143 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:31.930147 26384 cri.go:87] found id: ""
I0307 18:51:31.930153 26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:51:31.930194 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:31.933837 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:31.937392 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:31.937451 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:31.963795 26384 cri.go:87] found id: ""
I0307 18:51:31.963818 26384 logs.go:277] 0 containers: []
W0307 18:51:31.963824 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:31.963830 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:31.963871 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:31.997078 26384 cri.go:87] found id: ""
I0307 18:51:31.997101 26384 logs.go:277] 0 containers: []
W0307 18:51:31.997107 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:31.997119 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:31.997133 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:32.085403 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:32.085436 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:32.115532 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:32.115557 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:32.171653 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:32.171688 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:32.204332 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:32.204361 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:32.216172 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:32.216197 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:32.266551 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:32.266575 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:51:32.266593 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:32.297132 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:51:32.297159 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:32.344077 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:32.344105 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:32.403948 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:32.403977 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:34.935152 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:34.935872 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:35.241335 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:35.241407 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:35.270388 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:35.270412 26384 cri.go:87] found id: ""
I0307 18:51:35.270418 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:35.270468 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:35.275051 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:35.275114 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:35.304925 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:35.304971 26384 cri.go:87] found id: ""
I0307 18:51:35.304979 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:51:35.305030 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:35.308987 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:35.309043 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:35.334992 26384 cri.go:87] found id: ""
I0307 18:51:35.335015 26384 logs.go:277] 0 containers: []
W0307 18:51:35.335024 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:35.335031 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:35.335078 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:35.363029 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:35.363054 26384 cri.go:87] found id: ""
I0307 18:51:35.363062 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:35.363112 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:35.366976 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:35.367027 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:35.393011 26384 cri.go:87] found id: ""
I0307 18:51:35.393033 26384 logs.go:277] 0 containers: []
W0307 18:51:35.393040 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:35.393046 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:35.393089 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:35.418706 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:35.418731 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:35.418738 26384 cri.go:87] found id: ""
I0307 18:51:35.418746 26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:51:35.418795 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:35.422711 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:35.426344 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:35.426404 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:35.453517 26384 cri.go:87] found id: ""
I0307 18:51:35.453540 26384 logs.go:277] 0 containers: []
W0307 18:51:35.453547 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:35.453552 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:35.453600 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:35.480473 26384 cri.go:87] found id: ""
I0307 18:51:35.480506 26384 logs.go:277] 0 containers: []
W0307 18:51:35.480535 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:35.480557 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:35.480572 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:35.514397 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:35.514430 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:35.553507 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:51:35.553543 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:35.594291 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:35.594323 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:35.649916 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:35.649950 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:35.708932 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:35.708962 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:35.720655 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:35.720682 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:35.775147 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:35.775170 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:51:35.775185 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:35.808353 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:35.808378 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:35.888351 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:35.888387 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:38.421085 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:38.421679 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:38.741179 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:38.741264 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:38.771512 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:38.771541 26384 cri.go:87] found id: ""
I0307 18:51:38.771552 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:38.771608 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:38.775448 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:38.775518 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:38.803713 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:38.803738 26384 cri.go:87] found id: ""
I0307 18:51:38.803746 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:38.803797 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:38.807432 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:38.807485 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:38.841539 26384 cri.go:87] found id: ""
I0307 18:51:38.841564 26384 logs.go:277] 0 containers: []
W0307 18:51:38.841572 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:38.841580 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:38.841700 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:38.873163 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:38.873189 26384 cri.go:87] found id: ""
I0307 18:51:38.873197 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:38.873244 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:38.876827 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:38.876887 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:38.904500 26384 cri.go:87] found id: ""
I0307 18:51:38.904525 26384 logs.go:277] 0 containers: []
W0307 18:51:38.904535 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:38.904541 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:38.904605 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:38.933684 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:38.933703 26384 cri.go:87] found id: ""
I0307 18:51:38.933708 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:38.933753 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:38.937611 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:38.937673 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:38.967298 26384 cri.go:87] found id: ""
I0307 18:51:38.967317 26384 logs.go:277] 0 containers: []
W0307 18:51:38.967323 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:38.967329 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:38.967381 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:38.994836 26384 cri.go:87] found id: ""
I0307 18:51:38.994857 26384 logs.go:277] 0 containers: []
W0307 18:51:38.994864 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:38.994875 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:38.994885 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:39.013172 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:39.013202 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:39.050550 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:39.050577 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:39.081654 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:39.081686 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:39.122178 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:39.122206 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:39.157534 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:39.157558 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:39.215607 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:39.215638 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:39.270533 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:39.270555 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:39.270565 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:39.351014 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:39.351046 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:41.910810 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:41.911444 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:42.240866 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:42.240934 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:42.270659 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:42.270686 26384 cri.go:87] found id: ""
I0307 18:51:42.270693 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:42.270744 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:42.274956 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:42.275009 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:42.302640 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:42.302659 26384 cri.go:87] found id: ""
I0307 18:51:42.302666 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:42.302708 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:42.306628 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:42.306683 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:42.333725 26384 cri.go:87] found id: ""
I0307 18:51:42.333744 26384 logs.go:277] 0 containers: []
W0307 18:51:42.333750 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:42.333757 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:42.333797 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:42.361433 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:42.361455 26384 cri.go:87] found id: ""
I0307 18:51:42.361461 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:42.361525 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:42.365419 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:42.365475 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:42.390359 26384 cri.go:87] found id: ""
I0307 18:51:42.390386 26384 logs.go:277] 0 containers: []
W0307 18:51:42.390394 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:42.390400 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:42.390466 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:42.418877 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:42.418900 26384 cri.go:87] found id: ""
I0307 18:51:42.418909 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:42.418961 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:42.422852 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:42.422922 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:42.449901 26384 cri.go:87] found id: ""
I0307 18:51:42.449937 26384 logs.go:277] 0 containers: []
W0307 18:51:42.449947 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:42.449953 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:42.450013 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:42.478218 26384 cri.go:87] found id: ""
I0307 18:51:42.478243 26384 logs.go:277] 0 containers: []
W0307 18:51:42.478251 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:42.478269 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:42.478286 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:42.506655 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:42.506700 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:42.582409 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:42.582444 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:42.615907 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:42.615931 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:42.657529 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:42.657560 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:42.712843 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:42.712871 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:42.745993 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:42.746017 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:42.808149 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:42.808182 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:42.820414 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:42.820435 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:42.873183 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:45.374057 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:45.374585 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:45.741047 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:45.741134 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:45.770908 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:45.770936 26384 cri.go:87] found id: ""
I0307 18:51:45.770944 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:45.771001 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:45.775199 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:45.775271 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:45.804540 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:45.804560 26384 cri.go:87] found id: ""
I0307 18:51:45.804567 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:45.804609 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:45.808609 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:45.808686 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:45.835602 26384 cri.go:87] found id: ""
I0307 18:51:45.835627 26384 logs.go:277] 0 containers: []
W0307 18:51:45.835635 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:45.835643 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:45.835702 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:45.868007 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:45.868029 26384 cri.go:87] found id: ""
I0307 18:51:45.868038 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:45.868098 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:45.872229 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:45.872288 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:45.900275 26384 cri.go:87] found id: ""
I0307 18:51:45.900301 26384 logs.go:277] 0 containers: []
W0307 18:51:45.900310 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:45.900317 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:45.900380 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:45.928163 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:45.928182 26384 cri.go:87] found id: ""
I0307 18:51:45.928189 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:45.928248 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:45.932473 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:45.932532 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:45.961937 26384 cri.go:87] found id: ""
I0307 18:51:45.961971 26384 logs.go:277] 0 containers: []
W0307 18:51:45.961982 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:45.961990 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:45.962041 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:45.991124 26384 cri.go:87] found id: ""
I0307 18:51:45.991158 26384 logs.go:277] 0 containers: []
W0307 18:51:45.991165 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:45.991178 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:45.991195 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:46.055916 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:46.055947 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:46.069670 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:46.069697 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:46.123987 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:46.124010 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:46.124024 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:46.158206 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:46.158235 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:46.234157 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:46.234188 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:46.277028 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:46.277054 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:46.331295 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:46.331325 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:46.369056 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:46.369081 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:48.902692 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:48.903509 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:49.240949 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:49.241016 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:49.270709 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:49.270735 26384 cri.go:87] found id: ""
I0307 18:51:49.270744 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:49.270804 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:49.274731 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:49.274789 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:49.302081 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:49.302100 26384 cri.go:87] found id: ""
I0307 18:51:49.302108 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:49.302166 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:49.306174 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:49.306234 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:49.333438 26384 cri.go:87] found id: ""
I0307 18:51:49.333461 26384 logs.go:277] 0 containers: []
W0307 18:51:49.333468 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:49.333474 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:49.333527 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:49.365533 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:49.365562 26384 cri.go:87] found id: ""
I0307 18:51:49.365569 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:49.365610 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:49.369216 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:49.369276 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:49.398301 26384 cri.go:87] found id: ""
I0307 18:51:49.398326 26384 logs.go:277] 0 containers: []
W0307 18:51:49.398334 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:49.398341 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:49.398398 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:49.427703 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:49.427722 26384 cri.go:87] found id: ""
I0307 18:51:49.427730 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:49.427774 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:49.431651 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:49.431702 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:49.462642 26384 cri.go:87] found id: ""
I0307 18:51:49.462667 26384 logs.go:277] 0 containers: []
W0307 18:51:49.462674 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:49.462679 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:49.462729 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:49.489078 26384 cri.go:87] found id: ""
I0307 18:51:49.489106 26384 logs.go:277] 0 containers: []
W0307 18:51:49.489116 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:49.489129 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:49.489140 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:49.518966 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:49.518994 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:49.578313 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:49.578343 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:49.632259 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:49.632280 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:49.632292 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:49.665772 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:49.665797 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:49.745503 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:49.745534 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:49.785793 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:49.785819 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:49.821781 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:49.821843 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:49.888865 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:49.888906 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:52.403328 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:52.403890 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:52.741393 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:52.741477 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:52.770492 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:52.770514 26384 cri.go:87] found id: ""
I0307 18:51:52.770520 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:52.770575 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:52.774281 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:52.774334 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:52.804403 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:52.804427 26384 cri.go:87] found id: ""
I0307 18:51:52.804435 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:52.804480 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:52.808178 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:52.808226 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:52.836026 26384 cri.go:87] found id: ""
I0307 18:51:52.836048 26384 logs.go:277] 0 containers: []
W0307 18:51:52.836055 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:52.836060 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:52.836118 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:52.867795 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:52.867824 26384 cri.go:87] found id: ""
I0307 18:51:52.867834 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:52.867891 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:52.871532 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:52.871602 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:52.899536 26384 cri.go:87] found id: ""
I0307 18:51:52.899558 26384 logs.go:277] 0 containers: []
W0307 18:51:52.899565 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:52.899570 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:52.899631 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:52.927081 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:52.927105 26384 cri.go:87] found id: ""
I0307 18:51:52.927114 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:52.927170 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:52.930990 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:52.931056 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:52.961939 26384 cri.go:87] found id: ""
I0307 18:51:52.961965 26384 logs.go:277] 0 containers: []
W0307 18:51:52.961973 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:52.961978 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:52.962025 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:52.990556 26384 cri.go:87] found id: ""
I0307 18:51:52.990582 26384 logs.go:277] 0 containers: []
W0307 18:51:52.990589 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:52.990602 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:52.990611 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:53.055863 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:53.055899 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:53.118674 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:53.118699 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:53.118712 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:53.160200 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:53.160226 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:53.193132 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:53.193157 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:53.206488 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:53.206521 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:53.239547 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:53.239575 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:53.271150 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:53.271179 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:53.355907 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:53.355937 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
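Each `cri.go:52` listing step above runs `sudo crictl ps -a --quiet --name=<component>` and counts the returned IDs, which is how the log arrives at "1 containers" for kube-apiserver but "No container was found matching coredns". A sketch of that counting with canned output mimicking the log (no crictl is invoked here):

```shell
# crictl with --quiet prints one container ID per line; empty output
# means no matching container. IDs below are copied from the log.
ids_apiserver="93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
ids_coredns=""

count_ids() {
    # count non-empty lines of a crictl --quiet result
    printf '%s' "$1" | grep -c .
}

echo "kube-apiserver: $(count_ids "$ids_apiserver") containers"
echo "coredns: $(count_ids "$ids_coredns") containers"
```

The 0-container results for coredns, kube-proxy, kindnet, and storage-provisioner are consistent with kubelet never getting far enough to schedule pods while the apiserver stays unreachable.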
I0307 18:51:55.915778 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:55.916343 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:56.240741 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:56.240815 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:56.276584 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:56.276609 26384 cri.go:87] found id: ""
I0307 18:51:56.276616 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:56.276662 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:56.280478 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:56.280543 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:56.310551 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:56.310580 26384 cri.go:87] found id: ""
I0307 18:51:56.310591 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:56.310652 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:56.314325 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:56.314380 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:56.345523 26384 cri.go:87] found id: ""
I0307 18:51:56.345545 26384 logs.go:277] 0 containers: []
W0307 18:51:56.345555 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:56.345562 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:56.345613 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:56.374295 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:56.374316 26384 cri.go:87] found id: ""
I0307 18:51:56.374325 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:56.374369 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:56.377845 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:56.377893 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:56.407290 26384 cri.go:87] found id: ""
I0307 18:51:56.407314 26384 logs.go:277] 0 containers: []
W0307 18:51:56.407323 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:56.407330 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:56.407387 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:56.434800 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:56.434822 26384 cri.go:87] found id: ""
I0307 18:51:56.434831 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:56.434889 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:56.438706 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:56.438771 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:56.469291 26384 cri.go:87] found id: ""
I0307 18:51:56.469321 26384 logs.go:277] 0 containers: []
W0307 18:51:56.469331 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:56.469338 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:56.469400 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:56.496682 26384 cri.go:87] found id: ""
I0307 18:51:56.496707 26384 logs.go:277] 0 containers: []
W0307 18:51:56.496716 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:56.496731 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:56.496749 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:56.558292 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:56.558324 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:56.616546 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:56.616566 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:56.616576 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:56.645444 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:56.645482 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:56.690522 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:56.690549 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:56.729452 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:56.729480 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:56.741227 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:56.741250 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:56.774040 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:56.774069 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:56.851946 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:56.851980 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:59.410226 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:59.410809 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:59.741513 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:59.741583 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:59.770692 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:59.770715 26384 cri.go:87] found id: ""
I0307 18:51:59.770723 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:59.770773 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:59.774597 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:59.774652 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:59.802266 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:59.802286 26384 cri.go:87] found id: ""
I0307 18:51:59.802293 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:59.802330 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:59.805853 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:59.805892 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:59.833448 26384 cri.go:87] found id: ""
I0307 18:51:59.833466 26384 logs.go:277] 0 containers: []
W0307 18:51:59.833473 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:59.833477 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:59.833517 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:59.864701 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:59.864723 26384 cri.go:87] found id: ""
I0307 18:51:59.864732 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:59.864787 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:59.868622 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:59.868687 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:59.900470 26384 cri.go:87] found id: ""
I0307 18:51:59.900500 26384 logs.go:277] 0 containers: []
W0307 18:51:59.900510 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:59.900518 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:59.900573 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:59.927551 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:59.927580 26384 cri.go:87] found id: ""
I0307 18:51:59.927588 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:59.927633 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:59.931339 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:59.931393 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:59.959403 26384 cri.go:87] found id: ""
I0307 18:51:59.959426 26384 logs.go:277] 0 containers: []
W0307 18:51:59.959436 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:59.959442 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:59.959484 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:59.987595 26384 cri.go:87] found id: ""
I0307 18:51:59.987616 26384 logs.go:277] 0 containers: []
W0307 18:51:59.987623 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:59.987637 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:59.987654 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:00.035743 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:00.035772 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:00.099440 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:00.099473 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:00.131520 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:00.131549 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:00.208993 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:00.209030 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:00.267588 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:00.267622 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:00.301447 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:00.301476 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:00.313284 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:00.313307 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:00.368862 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:00.368881 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:00.368892 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:02.901502 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:02.902198 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:03.240812 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:03.240884 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:03.271596 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:03.271623 26384 cri.go:87] found id: ""
I0307 18:52:03.271632 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:03.271693 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:03.276075 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:03.276140 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:03.306294 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:03.306321 26384 cri.go:87] found id: ""
I0307 18:52:03.306329 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:03.306372 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:03.310127 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:03.310195 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:03.346928 26384 cri.go:87] found id: ""
I0307 18:52:03.346956 26384 logs.go:277] 0 containers: []
W0307 18:52:03.346964 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:03.346970 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:03.347028 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:03.373901 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:03.373935 26384 cri.go:87] found id: ""
I0307 18:52:03.373944 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:03.374004 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:03.377726 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:03.377816 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:03.408820 26384 cri.go:87] found id: ""
I0307 18:52:03.408855 26384 logs.go:277] 0 containers: []
W0307 18:52:03.408862 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:03.408880 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:03.408938 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:03.437027 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:03.437049 26384 cri.go:87] found id: ""
I0307 18:52:03.437060 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:03.437104 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:03.440989 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:03.441047 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:03.470590 26384 cri.go:87] found id: ""
I0307 18:52:03.470614 26384 logs.go:277] 0 containers: []
W0307 18:52:03.470621 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:03.470627 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:03.470688 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:03.500217 26384 cri.go:87] found id: ""
I0307 18:52:03.500244 26384 logs.go:277] 0 containers: []
W0307 18:52:03.500252 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:03.500267 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:03.500280 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:03.566239 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:03.566268 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:03.625165 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:03.625184 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:03.625195 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:03.682195 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:03.682226 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:03.719700 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:03.719727 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:03.731216 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:03.731240 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:03.763196 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:03.763229 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:03.791661 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:03.791686 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:03.868166 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:03.868202 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
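Every "Gathering logs for ..." step above is one remote command executed through `ssh_runner`. This sketch only assembles the same command strings seen in the log (unit names and the 400-line tail are copied from it); nothing is run against a VM:

```shell
# Assemble the journalctl/dmesg gather commands exactly as they appear in
# the log's ssh_runner lines. tail_n matches the log's "-n 400"/"tail -n 400".
tail_n=400
cmds=$(
    for unit in kubelet containerd; do
        echo "sudo journalctl -u $unit -n $tail_n"
    done
    echo "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n $tail_n"
)
echo "$cmds"
```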
I0307 18:52:06.409727 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:06.410322 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:06.740737 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:06.740806 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:06.771108 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:06.771137 26384 cri.go:87] found id: ""
I0307 18:52:06.771144 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:06.771189 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:06.775193 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:06.775250 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:06.806716 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:06.806737 26384 cri.go:87] found id: ""
I0307 18:52:06.806746 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:06.806795 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:06.810459 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:06.810504 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:06.837774 26384 cri.go:87] found id: ""
I0307 18:52:06.837797 26384 logs.go:277] 0 containers: []
W0307 18:52:06.837804 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:06.837809 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:06.837860 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:06.866218 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:06.866239 26384 cri.go:87] found id: ""
I0307 18:52:06.866249 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:06.866303 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:06.869982 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:06.870039 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:06.899518 26384 cri.go:87] found id: ""
I0307 18:52:06.899546 26384 logs.go:277] 0 containers: []
W0307 18:52:06.899556 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:06.899562 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:06.899617 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:06.927743 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:06.927770 26384 cri.go:87] found id: ""
I0307 18:52:06.927778 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:06.927820 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:06.931549 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:06.931613 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:06.961419 26384 cri.go:87] found id: ""
I0307 18:52:06.961445 26384 logs.go:277] 0 containers: []
W0307 18:52:06.961452 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:06.961457 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:06.961518 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:06.989502 26384 cri.go:87] found id: ""
I0307 18:52:06.989526 26384 logs.go:277] 0 containers: []
W0307 18:52:06.989532 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:06.989546 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:06.989559 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:07.025827 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:07.025850 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:07.086485 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:07.086512 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:07.098772 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:07.098799 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:07.130198 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:07.130225 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:07.212261 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:07.212293 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:07.268115 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:07.268148 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:07.330511 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:07.330537 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:07.330549 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:07.362299 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:07.362331 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:09.904436 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:09.905035 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:10.241493 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:10.241591 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:10.270226 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:10.270250 26384 cri.go:87] found id: ""
I0307 18:52:10.270259 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:10.270316 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:10.274003 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:10.274065 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:10.301912 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:10.301935 26384 cri.go:87] found id: ""
I0307 18:52:10.301943 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:10.301995 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:10.305750 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:10.305809 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:10.333329 26384 cri.go:87] found id: ""
I0307 18:52:10.333347 26384 logs.go:277] 0 containers: []
W0307 18:52:10.333356 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:10.333364 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:10.333415 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:10.365807 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:10.365830 26384 cri.go:87] found id: ""
I0307 18:52:10.365837 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:10.365876 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:10.369503 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:10.369555 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:10.402354 26384 cri.go:87] found id: ""
I0307 18:52:10.402382 26384 logs.go:277] 0 containers: []
W0307 18:52:10.402391 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:10.402398 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:10.402458 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:10.431242 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:10.431268 26384 cri.go:87] found id: ""
I0307 18:52:10.431278 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:10.431331 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:10.435085 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:10.435150 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:10.462020 26384 cri.go:87] found id: ""
I0307 18:52:10.462044 26384 logs.go:277] 0 containers: []
W0307 18:52:10.462053 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:10.462059 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:10.462117 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:10.492729 26384 cri.go:87] found id: ""
I0307 18:52:10.492755 26384 logs.go:277] 0 containers: []
W0307 18:52:10.492761 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:10.492776 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:10.492788 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:10.550753 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:10.550787 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:10.587328 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:10.587353 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:10.649658 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:10.649690 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:10.688111 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:10.688141 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:10.715243 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:10.715271 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:10.794097 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:10.794129 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:10.806313 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:10.806337 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:10.859925 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:10.859948 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:10.859957 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:13.412753 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:13.413326 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:13.740752 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:13.740822 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:13.769106 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:13.769130 26384 cri.go:87] found id: ""
I0307 18:52:13.769139 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:13.769197 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:13.772932 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:13.772977 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:13.799190 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:13.799214 26384 cri.go:87] found id: ""
I0307 18:52:13.799224 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:13.799272 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:13.803163 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:13.803229 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:13.829114 26384 cri.go:87] found id: ""
I0307 18:52:13.829137 26384 logs.go:277] 0 containers: []
W0307 18:52:13.829143 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:13.829148 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:13.829215 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:13.860207 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:13.860232 26384 cri.go:87] found id: ""
I0307 18:52:13.860241 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:13.860299 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:13.864306 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:13.864365 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:13.895421 26384 cri.go:87] found id: ""
I0307 18:52:13.895447 26384 logs.go:277] 0 containers: []
W0307 18:52:13.895456 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:13.895464 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:13.895523 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:13.926222 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:13.926245 26384 cri.go:87] found id: ""
I0307 18:52:13.926252 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:13.926301 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:13.930178 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:13.930235 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:13.954048 26384 cri.go:87] found id: ""
I0307 18:52:13.954067 26384 logs.go:277] 0 containers: []
W0307 18:52:13.954073 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:13.954081 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:13.954137 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:13.982093 26384 cri.go:87] found id: ""
I0307 18:52:13.982112 26384 logs.go:277] 0 containers: []
W0307 18:52:13.982118 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:13.982130 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:13.982143 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:14.038975 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:14.038990 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:14.039000 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:14.090619 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:14.090645 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:14.148386 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:14.148418 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:14.209750 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:14.209782 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:14.222299 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:14.222320 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:14.259738 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:14.259764 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:14.288148 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:14.288183 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:14.364866 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:14.364898 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:16.896622 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:16.897179 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:17.241681 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:17.241765 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:17.270963 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:17.270985 26384 cri.go:87] found id: ""
I0307 18:52:17.270994 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:17.271055 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:17.274819 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:17.274879 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:17.303431 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:17.303455 26384 cri.go:87] found id: ""
I0307 18:52:17.303464 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:17.303516 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:17.307271 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:17.307316 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:17.336969 26384 cri.go:87] found id: ""
I0307 18:52:17.336994 26384 logs.go:277] 0 containers: []
W0307 18:52:17.337002 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:17.337009 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:17.337061 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:17.364451 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:17.364476 26384 cri.go:87] found id: ""
I0307 18:52:17.364484 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:17.364543 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:17.368076 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:17.368130 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:17.395637 26384 cri.go:87] found id: ""
I0307 18:52:17.395660 26384 logs.go:277] 0 containers: []
W0307 18:52:17.395667 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:17.395672 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:17.395715 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:17.423253 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:17.423273 26384 cri.go:87] found id: ""
I0307 18:52:17.423279 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:17.423321 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:17.427005 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:17.427060 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:17.454713 26384 cri.go:87] found id: ""
I0307 18:52:17.454731 26384 logs.go:277] 0 containers: []
W0307 18:52:17.454736 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:17.454742 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:17.454784 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:17.486176 26384 cri.go:87] found id: ""
I0307 18:52:17.486199 26384 logs.go:277] 0 containers: []
W0307 18:52:17.486206 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:17.486219 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:17.486229 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:17.498032 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:17.498055 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:17.557073 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:17.557097 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:17.557110 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:17.594388 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:17.594418 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:17.620305 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:17.620338 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:17.702872 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:17.702904 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:17.759889 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:17.759926 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:17.817947 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:17.817980 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:17.865944 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:17.865973 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:20.398731 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:20.399378 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:20.740808 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:20.740889 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:20.774030 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:20.774056 26384 cri.go:87] found id: ""
I0307 18:52:20.774066 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:20.774117 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:20.778074 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:20.778136 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:20.806773 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:20.806791 26384 cri.go:87] found id: ""
I0307 18:52:20.806798 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:20.806846 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:20.810652 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:20.810700 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:20.838994 26384 cri.go:87] found id: ""
I0307 18:52:20.839019 26384 logs.go:277] 0 containers: []
W0307 18:52:20.839029 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:20.839042 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:20.839102 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:20.869727 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:20.869748 26384 cri.go:87] found id: ""
I0307 18:52:20.869756 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:20.869812 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:20.873736 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:20.873793 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:20.901823 26384 cri.go:87] found id: ""
I0307 18:52:20.901844 26384 logs.go:277] 0 containers: []
W0307 18:52:20.901851 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:20.901857 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:20.901929 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:20.934273 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:20.934298 26384 cri.go:87] found id: ""
I0307 18:52:20.934306 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:20.934356 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:20.938406 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:20.938472 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:20.969450 26384 cri.go:87] found id: ""
I0307 18:52:20.969479 26384 logs.go:277] 0 containers: []
W0307 18:52:20.969486 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:20.969492 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:20.969541 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:21.001492 26384 cri.go:87] found id: ""
I0307 18:52:21.001514 26384 logs.go:277] 0 containers: []
W0307 18:52:21.001521 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:21.001534 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:21.001548 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:21.054970 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:21.054986 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:21.054995 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:21.088359 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:21.088383 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:21.120677 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:21.120706 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:21.182999 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:21.183047 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:21.245976 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:21.246016 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:21.346906 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:21.346937 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:21.395390 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:21.395425 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:21.428290 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:21.428320 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:23.941739 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:23.942328 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:24.240694 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:24.240774 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:24.270200 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:24.270223 26384 cri.go:87] found id: ""
I0307 18:52:24.270230 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:24.270277 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:24.274395 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:24.274459 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:24.305875 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:24.305898 26384 cri.go:87] found id: ""
I0307 18:52:24.305919 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:24.305974 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:24.309735 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:24.309791 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:24.336466 26384 cri.go:87] found id: ""
I0307 18:52:24.336484 26384 logs.go:277] 0 containers: []
W0307 18:52:24.336493 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:24.336499 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:24.336550 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:24.364312 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:24.364337 26384 cri.go:87] found id: ""
I0307 18:52:24.364347 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:24.364398 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:24.368537 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:24.368610 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:24.399307 26384 cri.go:87] found id: ""
I0307 18:52:24.399333 26384 logs.go:277] 0 containers: []
W0307 18:52:24.399343 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:24.399350 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:24.399410 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:24.428137 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:24.428157 26384 cri.go:87] found id: ""
I0307 18:52:24.428165 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:24.428220 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:24.432114 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:24.432177 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:24.458423 26384 cri.go:87] found id: ""
I0307 18:52:24.458443 26384 logs.go:277] 0 containers: []
W0307 18:52:24.458452 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:24.458458 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:24.458507 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:24.486856 26384 cri.go:87] found id: ""
I0307 18:52:24.486881 26384 logs.go:277] 0 containers: []
W0307 18:52:24.486889 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:24.486907 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:24.486920 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:24.568604 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:24.568635 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:24.609771 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:24.609802 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:24.665713 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:24.665734 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:24.665752 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:24.691910 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:24.691937 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:24.723832 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:24.723860 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:24.764806 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:24.764833 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:24.821496 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:24.821529 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:24.880200 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:24.880230 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:27.393632 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:27.394219 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:27.741710 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:27.741782 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:27.770323 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:27.770343 26384 cri.go:87] found id: ""
I0307 18:52:27.770349 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:27.770405 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:27.774285 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:27.774345 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:27.800912 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:27.800933 26384 cri.go:87] found id: ""
I0307 18:52:27.800942 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:27.800991 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:27.804444 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:27.804490 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:27.836265 26384 cri.go:87] found id: ""
I0307 18:52:27.836290 26384 logs.go:277] 0 containers: []
W0307 18:52:27.836297 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:27.836303 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:27.836359 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:27.865231 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:27.865260 26384 cri.go:87] found id: ""
I0307 18:52:27.865269 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:27.865317 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:27.869523 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:27.869586 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:27.900740 26384 cri.go:87] found id: ""
I0307 18:52:27.900770 26384 logs.go:277] 0 containers: []
W0307 18:52:27.900780 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:27.900787 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:27.900849 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:27.929343 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:27.929371 26384 cri.go:87] found id: ""
I0307 18:52:27.929381 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:27.929440 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:27.933280 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:27.933348 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:27.966078 26384 cri.go:87] found id: ""
I0307 18:52:27.966104 26384 logs.go:277] 0 containers: []
W0307 18:52:27.966111 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:27.966119 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:27.966175 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:27.994539 26384 cri.go:87] found id: ""
I0307 18:52:27.994562 26384 logs.go:277] 0 containers: []
W0307 18:52:27.994568 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:27.994581 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:27.994591 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:28.026948 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:28.026989 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:28.039179 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:28.039208 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:28.094604 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:28.094626 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:28.094637 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:28.134457 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:28.134490 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:28.190768 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:28.192394 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:28.251450 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:28.251489 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:28.285082 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:28.285108 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:28.316724 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:28.316750 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:30.901642 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:30.902211 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:31.241667 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:31.241736 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:31.271253 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:31.271279 26384 cri.go:87] found id: ""
I0307 18:52:31.271288 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:31.271343 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:31.275766 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:31.275822 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:31.304092 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:31.304115 26384 cri.go:87] found id: ""
I0307 18:52:31.304121 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:31.304161 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:31.307829 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:31.307887 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:31.336157 26384 cri.go:87] found id: ""
I0307 18:52:31.336184 26384 logs.go:277] 0 containers: []
W0307 18:52:31.336193 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:31.336201 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:31.336266 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:31.362407 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:31.362427 26384 cri.go:87] found id: ""
I0307 18:52:31.362433 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:31.362484 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:31.366267 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:31.366323 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:31.392005 26384 cri.go:87] found id: ""
I0307 18:52:31.392031 26384 logs.go:277] 0 containers: []
W0307 18:52:31.392040 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:31.392047 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:31.392107 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:31.417145 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:31.417164 26384 cri.go:87] found id: ""
I0307 18:52:31.417170 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:31.417226 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:31.421051 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:31.421093 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:31.452946 26384 cri.go:87] found id: ""
I0307 18:52:31.452966 26384 logs.go:277] 0 containers: []
W0307 18:52:31.452973 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:31.452991 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:31.453072 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:31.482025 26384 cri.go:87] found id: ""
I0307 18:52:31.482048 26384 logs.go:277] 0 containers: []
W0307 18:52:31.482058 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:31.482075 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:31.482094 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:31.535162 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:31.535180 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:31.535190 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:31.575114 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:31.575149 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:31.630597 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:31.630629 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:31.689816 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:31.689854 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:31.703439 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:31.703465 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:31.733755 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:31.733789 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:31.761485 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:31.761517 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:31.849205 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:31.849238 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:34.397092 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:34.399029 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:34.740924 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:34.741012 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:34.768741 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:34.768769 26384 cri.go:87] found id: ""
I0307 18:52:34.768776 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:34.768826 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:34.772560 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:34.772608 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:34.801197 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:34.801219 26384 cri.go:87] found id: ""
I0307 18:52:34.801226 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:34.801268 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:34.805070 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:34.805123 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:34.841217 26384 cri.go:87] found id: ""
I0307 18:52:34.841245 26384 logs.go:277] 0 containers: []
W0307 18:52:34.841258 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:34.841267 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:34.841329 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:34.878585 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:34.878643 26384 cri.go:87] found id: ""
I0307 18:52:34.878663 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:34.878720 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:34.882566 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:34.882625 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:34.909524 26384 cri.go:87] found id: ""
I0307 18:52:34.909550 26384 logs.go:277] 0 containers: []
W0307 18:52:34.909557 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:34.909565 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:34.909613 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:34.936954 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:34.936975 26384 cri.go:87] found id: ""
I0307 18:52:34.936983 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:34.937053 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:34.941502 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:34.941564 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:34.971973 26384 cri.go:87] found id: ""
I0307 18:52:34.971995 26384 logs.go:277] 0 containers: []
W0307 18:52:34.972004 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:34.972011 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:34.972070 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:35.003175 26384 cri.go:87] found id: ""
I0307 18:52:35.003199 26384 logs.go:277] 0 containers: []
W0307 18:52:35.003206 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:35.003221 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:35.003233 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:35.057263 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:35.057287 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:35.057300 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:35.093840 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:35.093865 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:35.131551 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:35.131580 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:35.213034 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:35.213066 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:35.250410 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:35.250442 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:35.305928 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:35.305959 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:35.366041 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:35.366074 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:35.411044 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:35.411068 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:37.924460 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:37.925115 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:38.240997 26384 kubeadm.go:637] restartCluster took 4m28.730822487s
W0307 18:52:38.241143 26384 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
I0307 18:52:38.241176 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0307 18:52:39.540779 26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.299584283s)
I0307 18:52:39.540844 26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 18:52:39.554353 26384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0307 18:52:39.563539 26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0307 18:52:39.572536 26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0307 18:52:39.572574 26384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0307 18:52:39.609552 26384 kubeadm.go:322] W0307 18:52:39.601196 5604 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0307 18:52:39.746961 26384 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0307 18:56:41.125984 26384 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0307 18:56:41.126127 26384 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0307 18:56:41.127655 26384 kubeadm.go:322] [init] Using Kubernetes version: v1.24.4
I0307 18:56:41.127696 26384 kubeadm.go:322] [preflight] Running pre-flight checks
I0307 18:56:41.127765 26384 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0307 18:56:41.127875 26384 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0307 18:56:41.127983 26384 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0307 18:56:41.128061 26384 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0307 18:56:41.130326 26384 out.go:204] - Generating certificates and keys ...
I0307 18:56:41.130393 26384 kubeadm.go:322] [certs] Using existing ca certificate authority
I0307 18:56:41.130451 26384 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0307 18:56:41.130531 26384 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0307 18:56:41.130620 26384 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0307 18:56:41.130718 26384 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0307 18:56:41.130787 26384 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0307 18:56:41.130866 26384 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0307 18:56:41.130953 26384 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0307 18:56:41.131049 26384 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0307 18:56:41.131155 26384 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0307 18:56:41.131217 26384 kubeadm.go:322] [certs] Using the existing "sa" key
I0307 18:56:41.131292 26384 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0307 18:56:41.131363 26384 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0307 18:56:41.131434 26384 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0307 18:56:41.131523 26384 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0307 18:56:41.131603 26384 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0307 18:56:41.131688 26384 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0307 18:56:41.131762 26384 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0307 18:56:41.131795 26384 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0307 18:56:41.131852 26384 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0307 18:56:41.133514 26384 out.go:204] - Booting up control plane ...
I0307 18:56:41.133618 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0307 18:56:41.133699 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0307 18:56:41.133776 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0307 18:56:41.133863 26384 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0307 18:56:41.134051 26384 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0307 18:56:41.134110 26384 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0307 18:56:41.134119 26384 kubeadm.go:322]
I0307 18:56:41.134162 26384 kubeadm.go:322] Unfortunately, an error has occurred:
I0307 18:56:41.134218 26384 kubeadm.go:322] timed out waiting for the condition
I0307 18:56:41.134224 26384 kubeadm.go:322]
I0307 18:56:41.134270 26384 kubeadm.go:322] This error is likely caused by:
I0307 18:56:41.134347 26384 kubeadm.go:322] - The kubelet is not running
I0307 18:56:41.134504 26384 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0307 18:56:41.134517 26384 kubeadm.go:322]
I0307 18:56:41.134650 26384 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0307 18:56:41.134698 26384 kubeadm.go:322] - 'systemctl status kubelet'
I0307 18:56:41.134741 26384 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0307 18:56:41.134760 26384 kubeadm.go:322]
I0307 18:56:41.134863 26384 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0307 18:56:41.134935 26384 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0307 18:56:41.135037 26384 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl:
I0307 18:56:41.135174 26384 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
I0307 18:56:41.135274 26384 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0307 18:56:41.135447 26384 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
W0307 18:56:41.135604 26384 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0307 18:52:39.601196 5604 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0307 18:56:41.135655 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0307 18:56:42.416834 26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.281155319s)
I0307 18:56:42.416897 26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 18:56:42.431050 26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0307 18:56:42.440667 26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0307 18:56:42.440700 26384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0307 18:56:42.477411 26384 kubeadm.go:322] W0307 18:56:42.461556 7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0307 18:56:42.627046 26384 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0307 19:00:43.649484 26384 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0307 19:00:43.649599 26384 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0307 19:00:43.651218 26384 kubeadm.go:322] [init] Using Kubernetes version: v1.24.4
I0307 19:00:43.651271 26384 kubeadm.go:322] [preflight] Running pre-flight checks
I0307 19:00:43.651420 26384 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0307 19:00:43.651548 26384 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0307 19:00:43.651725 26384 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0307 19:00:43.651796 26384 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0307 19:00:43.654219 26384 out.go:204] - Generating certificates and keys ...
I0307 19:00:43.654288 26384 kubeadm.go:322] [certs] Using existing ca certificate authority
I0307 19:00:43.654338 26384 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0307 19:00:43.654403 26384 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0307 19:00:43.654458 26384 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0307 19:00:43.654514 26384 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0307 19:00:43.654563 26384 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0307 19:00:43.654618 26384 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0307 19:00:43.654668 26384 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0307 19:00:43.654730 26384 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0307 19:00:43.654798 26384 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0307 19:00:43.654859 26384 kubeadm.go:322] [certs] Using the existing "sa" key
I0307 19:00:43.654935 26384 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0307 19:00:43.654978 26384 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0307 19:00:43.655070 26384 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0307 19:00:43.655168 26384 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0307 19:00:43.655220 26384 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0307 19:00:43.655347 26384 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0307 19:00:43.655430 26384 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0307 19:00:43.655465 26384 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0307 19:00:43.655523 26384 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0307 19:00:43.657162 26384 out.go:204] - Booting up control plane ...
I0307 19:00:43.657245 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0307 19:00:43.657351 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0307 19:00:43.657442 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0307 19:00:43.657533 26384 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0307 19:00:43.657658 26384 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0307 19:00:43.657699 26384 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0307 19:00:43.657705 26384 kubeadm.go:322]
I0307 19:00:43.657736 26384 kubeadm.go:322] Unfortunately, an error has occurred:
I0307 19:00:43.657782 26384 kubeadm.go:322] timed out waiting for the condition
I0307 19:00:43.657789 26384 kubeadm.go:322]
I0307 19:00:43.657829 26384 kubeadm.go:322] This error is likely caused by:
I0307 19:00:43.657862 26384 kubeadm.go:322] - The kubelet is not running
I0307 19:00:43.657966 26384 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0307 19:00:43.657977 26384 kubeadm.go:322]
I0307 19:00:43.658062 26384 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0307 19:00:43.658091 26384 kubeadm.go:322] - 'systemctl status kubelet'
I0307 19:00:43.658134 26384 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0307 19:00:43.658142 26384 kubeadm.go:322]
I0307 19:00:43.658255 26384 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0307 19:00:43.658393 26384 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0307 19:00:43.658480 26384 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl:
I0307 19:00:43.658603 26384 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
I0307 19:00:43.658702 26384 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0307 19:00:43.658828 26384 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
I0307 19:00:43.658871 26384 kubeadm.go:403] StartCluster complete in 12m34.187466467s
I0307 19:00:43.658927 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 19:00:43.658974 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 19:00:43.701064 26384 cri.go:87] found id: "4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc"
I0307 19:00:43.701086 26384 cri.go:87] found id: ""
I0307 19:00:43.701098 26384 logs.go:277] 1 containers: [4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc]
I0307 19:00:43.701142 26384 ssh_runner.go:195] Run: which crictl
I0307 19:00:43.705362 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 19:00:43.705417 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 19:00:43.734452 26384 cri.go:87] found id: "c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56"
I0307 19:00:43.734469 26384 cri.go:87] found id: ""
I0307 19:00:43.734476 26384 logs.go:277] 1 containers: [c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56]
I0307 19:00:43.734531 26384 ssh_runner.go:195] Run: which crictl
I0307 19:00:43.739954 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 19:00:43.740015 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 19:00:43.766381 26384 cri.go:87] found id: ""
I0307 19:00:43.766402 26384 logs.go:277] 0 containers: []
W0307 19:00:43.766408 26384 logs.go:279] No container was found matching "coredns"
I0307 19:00:43.766413 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 19:00:43.766453 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 19:00:43.796840 26384 cri.go:87] found id: "1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41"
I0307 19:00:43.796867 26384 cri.go:87] found id: ""
I0307 19:00:43.796875 26384 logs.go:277] 1 containers: [1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41]
I0307 19:00:43.796929 26384 ssh_runner.go:195] Run: which crictl
I0307 19:00:43.801100 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 19:00:43.801154 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 19:00:43.830552 26384 cri.go:87] found id: ""
I0307 19:00:43.830577 26384 logs.go:277] 0 containers: []
W0307 19:00:43.830584 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 19:00:43.830589 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 19:00:43.830637 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 19:00:43.867303 26384 cri.go:87] found id: "8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06"
I0307 19:00:43.867324 26384 cri.go:87] found id: ""
I0307 19:00:43.867331 26384 logs.go:277] 1 containers: [8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06]
I0307 19:00:43.867370 26384 ssh_runner.go:195] Run: which crictl
I0307 19:00:43.871114 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 19:00:43.871164 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 19:00:43.904677 26384 cri.go:87] found id: ""
I0307 19:00:43.904703 26384 logs.go:277] 0 containers: []
W0307 19:00:43.904709 26384 logs.go:279] No container was found matching "kindnet"
I0307 19:00:43.904715 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 19:00:43.904758 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 19:00:43.944324 26384 cri.go:87] found id: ""
I0307 19:00:43.944349 26384 logs.go:277] 0 containers: []
W0307 19:00:43.944359 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 19:00:43.944378 26384 logs.go:123] Gathering logs for containerd ...
I0307 19:00:43.944395 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 19:00:44.011972 26384 logs.go:123] Gathering logs for kubelet ...
I0307 19:00:44.012003 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 19:00:44.077224 26384 logs.go:123] Gathering logs for dmesg ...
I0307 19:00:44.077258 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 19:00:44.091281 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 19:00:44.091305 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 19:00:44.158036 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 19:00:44.158054 26384 logs.go:123] Gathering logs for etcd [c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56] ...
I0307 19:00:44.158065 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56"
I0307 19:00:44.193518 26384 logs.go:123] Gathering logs for kube-scheduler [1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41] ...
I0307 19:00:44.193546 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41"
I0307 19:00:44.281107 26384 logs.go:123] Gathering logs for kube-apiserver [4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc] ...
I0307 19:00:44.281138 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc"
I0307 19:00:44.321328 26384 logs.go:123] Gathering logs for kube-controller-manager [8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06] ...
I0307 19:00:44.321353 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06"
I0307 19:00:44.370028 26384 logs.go:123] Gathering logs for container status ...
I0307 19:00:44.370058 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0307 19:00:44.410088 26384 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0307 18:56:42.461556 7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0307 19:00:44.410135 26384 out.go:239] *
W0307 19:00:44.410302 26384 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
W0307 19:00:44.410323 26384 out.go:239] *
W0307 19:00:44.411225 26384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0307 19:00:44.414682 26384 out.go:177]
W0307 19:00:44.416349 26384 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0307 18:56:42.461556 7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0307 19:00:44.416447 26384 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0307 19:00:44.416516 26384 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0307 19:00:44.419274 26384 out.go:177]
** /stderr **
preload_test.go:73: out/minikube-linux-amd64 start -p test-preload-203208 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd failed: exit status 109
panic.go:522: *** TestPreload FAILED at 2023-03-07 19:00:44.71260395 +0000 UTC m=+3530.150347056
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-203208 -n test-preload-203208
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-203208 -n test-preload-203208: exit status 2 (226.296039ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p test-preload-203208 logs -n 25
helpers_test.go:252: TestPreload logs:
-- stdout --
*
* ==> Audit <==
* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| cp | multinode-373242 cp multinode-373242-m03:/home/docker/cp-test.txt | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
| | multinode-373242:/home/docker/cp-test_multinode-373242-m03_multinode-373242.txt | | | | | |
| ssh | multinode-373242 ssh -n | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
| | multinode-373242-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-373242 ssh -n multinode-373242 sudo cat | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
| | /home/docker/cp-test_multinode-373242-m03_multinode-373242.txt | | | | | |
| cp | multinode-373242 cp multinode-373242-m03:/home/docker/cp-test.txt | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
| | multinode-373242-m02:/home/docker/cp-test_multinode-373242-m03_multinode-373242-m02.txt | | | | | |
| ssh | multinode-373242 ssh -n | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
| | multinode-373242-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | multinode-373242 ssh -n multinode-373242-m02 sudo cat | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
| | /home/docker/cp-test_multinode-373242-m03_multinode-373242-m02.txt | | | | | |
| node | multinode-373242 node stop m03 | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:24 UTC |
| node | multinode-373242 node start | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:24 UTC | 07 Mar 23 18:26 UTC |
| | m03 --alsologtostderr | | | | | |
| node | list -p multinode-373242 | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:26 UTC | |
| stop | -p multinode-373242 | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:26 UTC | 07 Mar 23 18:29 UTC |
| start | -p multinode-373242 | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:29 UTC | 07 Mar 23 18:35 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| node | list -p multinode-373242 | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:35 UTC | |
| node | multinode-373242 node delete | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:35 UTC | 07 Mar 23 18:35 UTC |
| | m03 | | | | | |
| stop | multinode-373242 stop | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:35 UTC | 07 Mar 23 18:38 UTC |
| start | -p multinode-373242 | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:38 UTC | 07 Mar 23 18:42 UTC |
| | --wait=true -v=8 | | | | | |
| | --alsologtostderr | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | list -p multinode-373242 | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:42 UTC | |
| start | -p multinode-373242-m02 | multinode-373242-m02 | jenkins | v1.29.0 | 07 Mar 23 18:42 UTC | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p multinode-373242-m03 | multinode-373242-m03 | jenkins | v1.29.0 | 07 Mar 23 18:42 UTC | 07 Mar 23 18:43 UTC |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| node | add -p multinode-373242 | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | |
| delete | -p multinode-373242-m03 | multinode-373242-m03 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | 07 Mar 23 18:43 UTC |
| delete | -p multinode-373242 | multinode-373242 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | 07 Mar 23 18:43 UTC |
| start | -p test-preload-203208 | test-preload-203208 | jenkins | v1.29.0 | 07 Mar 23 18:43 UTC | 07 Mar 23 18:45 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.24.4 | | | | | |
| ssh | -p test-preload-203208 | test-preload-203208 | jenkins | v1.29.0 | 07 Mar 23 18:45 UTC | 07 Mar 23 18:45 UTC |
| | -- sudo crictl pull | | | | | |
| | gcr.io/k8s-minikube/busybox | | | | | |
| stop | -p test-preload-203208 | test-preload-203208 | jenkins | v1.29.0 | 07 Mar 23 18:45 UTC | 07 Mar 23 18:47 UTC |
| start | -p test-preload-203208 | test-preload-203208 | jenkins | v1.29.0 | 07 Mar 23 18:47 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --wait=true --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/07 18:47:08
Running on machine: ubuntu-20-agent-5
Binary: Built with gc go1.20.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0307 18:47:08.188999 26384 out.go:296] Setting OutFile to fd 1 ...
I0307 18:47:08.189163 26384 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 18:47:08.189221 26384 out.go:309] Setting ErrFile to fd 2...
I0307 18:47:08.189235 26384 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0307 18:47:08.189633 26384 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15985-4052/.minikube/bin
I0307 18:47:08.190229 26384 out.go:303] Setting JSON to false
I0307 18:47:08.191033 26384 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5376,"bootTime":1678209452,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0307 18:47:08.191096 26384 start.go:135] virtualization: kvm guest
I0307 18:47:08.193540 26384 out.go:177] * [test-preload-203208] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
I0307 18:47:08.195219 26384 out.go:177] - MINIKUBE_LOCATION=15985
I0307 18:47:08.196770 26384 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0307 18:47:08.195178 26384 notify.go:220] Checking for updates...
I0307 18:47:08.198392 26384 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15985-4052/kubeconfig
I0307 18:47:08.199832 26384 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15985-4052/.minikube
I0307 18:47:08.201253 26384 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0307 18:47:08.202663 26384 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0307 18:47:08.204748 26384 config.go:182] Loaded profile config "test-preload-203208": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0307 18:47:08.205285 26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0307 18:47:08.205342 26384 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:47:08.220069 26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
I0307 18:47:08.220563 26384 main.go:141] libmachine: () Calling .GetVersion
I0307 18:47:08.221076 26384 main.go:141] libmachine: Using API Version 1
I0307 18:47:08.221096 26384 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:47:08.221432 26384 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:47:08.221584 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:08.223753 26384 out.go:177] * Kubernetes 1.26.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.2
I0307 18:47:08.225235 26384 driver.go:365] Setting default libvirt URI to qemu:///system
I0307 18:47:08.225524 26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0307 18:47:08.225572 26384 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:47:08.239705 26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
I0307 18:47:08.240091 26384 main.go:141] libmachine: () Calling .GetVersion
I0307 18:47:08.240557 26384 main.go:141] libmachine: Using API Version 1
I0307 18:47:08.240573 26384 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:47:08.240906 26384 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:47:08.241120 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:08.275331 26384 out.go:177] * Using the kvm2 driver based on existing profile
I0307 18:47:08.276690 26384 start.go:296] selected driver: kvm2
I0307 18:47:08.276702 26384 start.go:857] validating driver "kvm2" against &{Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:47:08.276795 26384 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0307 18:47:08.277360 26384 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:47:08.277421 26384 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15985-4052/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0307 18:47:08.291366 26384 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
I0307 18:47:08.291664 26384 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0307 18:47:08.291694 26384 cni.go:84] Creating CNI manager for ""
I0307 18:47:08.291705 26384 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0307 18:47:08.291717 26384 start_flags.go:319] config:
{Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:47:08.291838 26384 iso.go:125] acquiring lock: {Name:mkd51cb229a70df75d89beefefdcafed4c3dd9f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0307 18:47:08.293852 26384 out.go:177] * Starting control plane node test-preload-203208 in cluster test-preload-203208
I0307 18:47:08.296143 26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0307 18:47:08.450857 26384 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0307 18:47:08.450906 26384 cache.go:57] Caching tarball of preloaded images
I0307 18:47:08.451048 26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0307 18:47:08.453213 26384 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
I0307 18:47:08.454642 26384 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0307 18:47:08.614514 26384 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
I0307 18:47:32.826448 26384 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0307 18:47:32.826536 26384 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
I0307 18:47:33.690125 26384 cache.go:60] Finished verifying existence of preloaded tar for v1.24.4 on containerd
I0307 18:47:33.690264 26384 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/config.json ...
I0307 18:47:33.690465 26384 cache.go:193] Successfully downloaded all kic artifacts
I0307 18:47:33.690499 26384 start.go:364] acquiring machines lock for test-preload-203208: {Name:mk86d1042b74b1a783c77f2a2445172eb6d30958 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0307 18:47:33.690551 26384 start.go:368] acquired machines lock for "test-preload-203208" in 35.693µs
I0307 18:47:33.690566 26384 start.go:96] Skipping create...Using existing machine configuration
I0307 18:47:33.690574 26384 fix.go:55] fixHost starting:
I0307 18:47:33.690832 26384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0307 18:47:33.690865 26384 main.go:141] libmachine: Launching plugin server for driver kvm2
I0307 18:47:33.704555 26384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
I0307 18:47:33.704995 26384 main.go:141] libmachine: () Calling .GetVersion
I0307 18:47:33.705526 26384 main.go:141] libmachine: Using API Version 1
I0307 18:47:33.705549 26384 main.go:141] libmachine: () Calling .SetConfigRaw
I0307 18:47:33.705815 26384 main.go:141] libmachine: () Calling .GetMachineName
I0307 18:47:33.706046 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:33.706249 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetState
I0307 18:47:33.707747 26384 fix.go:103] recreateIfNeeded on test-preload-203208: state=Stopped err=<nil>
I0307 18:47:33.707767 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
W0307 18:47:33.707933 26384 fix.go:129] unexpected machine state, will restart: <nil>
I0307 18:47:33.710555 26384 out.go:177] * Restarting existing kvm2 VM for "test-preload-203208" ...
I0307 18:47:33.712032 26384 main.go:141] libmachine: (test-preload-203208) Calling .Start
I0307 18:47:33.712220 26384 main.go:141] libmachine: (test-preload-203208) Ensuring networks are active...
I0307 18:47:33.712842 26384 main.go:141] libmachine: (test-preload-203208) Ensuring network default is active
I0307 18:47:33.713296 26384 main.go:141] libmachine: (test-preload-203208) Ensuring network mk-test-preload-203208 is active
I0307 18:47:33.713652 26384 main.go:141] libmachine: (test-preload-203208) Getting domain xml...
I0307 18:47:33.714346 26384 main.go:141] libmachine: (test-preload-203208) Creating domain...
I0307 18:47:34.910876 26384 main.go:141] libmachine: (test-preload-203208) Waiting to get IP...
I0307 18:47:34.911746 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:34.912163 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:34.912255 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:34.912165 26419 retry.go:31] will retry after 212.425256ms: waiting for machine to come up
I0307 18:47:35.126663 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:35.127105 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:35.127129 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.127053 26419 retry.go:31] will retry after 263.969499ms: waiting for machine to come up
I0307 18:47:35.392652 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:35.393060 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:35.393084 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.393015 26419 retry.go:31] will retry after 468.684911ms: waiting for machine to come up
I0307 18:47:35.863601 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:35.864010 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:35.864033 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:35.863947 26419 retry.go:31] will retry after 431.412452ms: waiting for machine to come up
I0307 18:47:36.296448 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:36.296882 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:36.296912 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:36.296828 26419 retry.go:31] will retry after 752.77311ms: waiting for machine to come up
I0307 18:47:37.050685 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:37.051090 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:37.051119 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:37.051041 26419 retry.go:31] will retry after 743.261623ms: waiting for machine to come up
I0307 18:47:37.795856 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:37.796272 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:37.796308 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:37.796215 26419 retry.go:31] will retry after 1.170690029s: waiting for machine to come up
I0307 18:47:38.968781 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:38.969233 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:38.969258 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:38.969184 26419 retry.go:31] will retry after 1.337094513s: waiting for machine to come up
I0307 18:47:40.308636 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:40.309023 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:40.309045 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:40.308986 26419 retry.go:31] will retry after 1.490851661s: waiting for machine to come up
I0307 18:47:41.801795 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:41.802239 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:41.802269 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:41.802176 26419 retry.go:31] will retry after 2.070649174s: waiting for machine to come up
I0307 18:47:43.874879 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:43.875349 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:43.875380 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:43.875281 26419 retry.go:31] will retry after 2.737681725s: waiting for machine to come up
I0307 18:47:46.616128 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:46.616688 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:46.616712 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:46.616637 26419 retry.go:31] will retry after 2.87929565s: waiting for machine to come up
I0307 18:47:49.497470 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:49.498002 26384 main.go:141] libmachine: (test-preload-203208) DBG | unable to find current IP address of domain test-preload-203208 in network mk-test-preload-203208
I0307 18:47:49.498030 26384 main.go:141] libmachine: (test-preload-203208) DBG | I0307 18:47:49.497932 26419 retry.go:31] will retry after 4.103227875s: waiting for machine to come up
I0307 18:47:53.606187 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.606663 26384 main.go:141] libmachine: (test-preload-203208) Found IP for machine: 192.168.39.212
I0307 18:47:53.606696 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has current primary IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.606703 26384 main.go:141] libmachine: (test-preload-203208) Reserving static IP address...
I0307 18:47:53.607103 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "test-preload-203208", mac: "52:54:00:c5:37:98", ip: "192.168.39.212"} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.607138 26384 main.go:141] libmachine: (test-preload-203208) Reserved static IP address: 192.168.39.212
I0307 18:47:53.607159 26384 main.go:141] libmachine: (test-preload-203208) DBG | skip adding static IP to network mk-test-preload-203208 - found existing host DHCP lease matching {name: "test-preload-203208", mac: "52:54:00:c5:37:98", ip: "192.168.39.212"}
I0307 18:47:53.607180 26384 main.go:141] libmachine: (test-preload-203208) DBG | Getting to WaitForSSH function...
I0307 18:47:53.607195 26384 main.go:141] libmachine: (test-preload-203208) Waiting for SSH to be available...
I0307 18:47:53.609451 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.609920 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.609952 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.610021 26384 main.go:141] libmachine: (test-preload-203208) DBG | Using SSH client type: external
I0307 18:47:53.610088 26384 main.go:141] libmachine: (test-preload-203208) DBG | Using SSH private key: /home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa (-rw-------)
I0307 18:47:53.610128 26384 main.go:141] libmachine: (test-preload-203208) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa -p 22] /usr/bin/ssh <nil>}
I0307 18:47:53.610153 26384 main.go:141] libmachine: (test-preload-203208) DBG | About to run SSH command:
I0307 18:47:53.610166 26384 main.go:141] libmachine: (test-preload-203208) DBG | exit 0
I0307 18:47:53.693376 26384 main.go:141] libmachine: (test-preload-203208) DBG | SSH cmd err, output: <nil>:
I0307 18:47:53.693716 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetConfigRaw
I0307 18:47:53.694380 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
I0307 18:47:53.696583 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.696983 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.697018 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.697232 26384 profile.go:148] Saving config to /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/config.json ...
I0307 18:47:53.697422 26384 machine.go:88] provisioning docker machine ...
I0307 18:47:53.697443 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:53.697627 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
I0307 18:47:53.697782 26384 buildroot.go:166] provisioning hostname "test-preload-203208"
I0307 18:47:53.697798 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
I0307 18:47:53.697947 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:53.699860 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.700195 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.700225 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.700341 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:53.700502 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:53.700619 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:53.700716 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:53.700853 26384 main.go:141] libmachine: Using SSH client type: native
I0307 18:47:53.701264 26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.39.212 22 <nil> <nil>}
I0307 18:47:53.701276 26384 main.go:141] libmachine: About to run SSH command:
sudo hostname test-preload-203208 && echo "test-preload-203208" | sudo tee /etc/hostname
I0307 18:47:53.818077 26384 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-203208
I0307 18:47:53.818106 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:53.820950 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.821308 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.821334 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.821486 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:53.821689 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:53.821852 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:53.822005 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:53.822192 26384 main.go:141] libmachine: Using SSH client type: native
I0307 18:47:53.822574 26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.39.212 22 <nil> <nil>}
I0307 18:47:53.822590 26384 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\stest-preload-203208' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-203208/g' /etc/hosts;
else
echo '127.0.1.1 test-preload-203208' | sudo tee -a /etc/hosts;
fi
fi
I0307 18:47:53.938498 26384 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0307 18:47:53.938531 26384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15985-4052/.minikube CaCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15985-4052/.minikube}
I0307 18:47:53.938554 26384 buildroot.go:174] setting up certificates
I0307 18:47:53.938564 26384 provision.go:83] configureAuth start
I0307 18:47:53.938577 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetMachineName
I0307 18:47:53.938823 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
I0307 18:47:53.941788 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.942174 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.942193 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.942389 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:53.944344 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.944651 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:53.944679 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:53.944819 26384 provision.go:138] copyHostCerts
I0307 18:47:53.944864 26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem, removing ...
I0307 18:47:53.944874 26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem
I0307 18:47:53.944936 26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/cert.pem (1123 bytes)
I0307 18:47:53.945028 26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem, removing ...
I0307 18:47:53.945042 26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem
I0307 18:47:53.945069 26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/key.pem (1679 bytes)
I0307 18:47:53.945118 26384 exec_runner.go:144] found /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem, removing ...
I0307 18:47:53.945125 26384 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem
I0307 18:47:53.945144 26384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15985-4052/.minikube/ca.pem (1078 bytes)
I0307 18:47:53.945185 26384 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem org=jenkins.test-preload-203208 san=[192.168.39.212 192.168.39.212 localhost 127.0.0.1 minikube test-preload-203208]
I0307 18:47:54.280078 26384 provision.go:172] copyRemoteCerts
I0307 18:47:54.280140 26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0307 18:47:54.280162 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:54.282745 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.283051 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.283081 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.283221 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:54.283408 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.283548 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:54.283668 26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
I0307 18:47:54.366577 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0307 18:47:54.389837 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0307 18:47:54.411718 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0307 18:47:54.433964 26384 provision.go:86] duration metric: configureAuth took 495.388641ms
I0307 18:47:54.433989 26384 buildroot.go:189] setting minikube options for container-runtime
I0307 18:47:54.434187 26384 config.go:182] Loaded profile config "test-preload-203208": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
I0307 18:47:54.434202 26384 machine.go:91] provisioned docker machine in 736.766542ms
I0307 18:47:54.434211 26384 start.go:300] post-start starting for "test-preload-203208" (driver="kvm2")
I0307 18:47:54.434220 26384 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0307 18:47:54.434345 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:54.434642 26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0307 18:47:54.434666 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:54.437421 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.437782 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.437822 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.437973 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:54.438168 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.438298 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:54.438399 26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
I0307 18:47:54.518617 26384 ssh_runner.go:195] Run: cat /etc/os-release
I0307 18:47:54.522870 26384 info.go:137] Remote host: Buildroot 2021.02.12
I0307 18:47:54.522893 26384 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-4052/.minikube/addons for local assets ...
I0307 18:47:54.522953 26384 filesync.go:126] Scanning /home/jenkins/minikube-integration/15985-4052/.minikube/files for local assets ...
I0307 18:47:54.523037 26384 filesync.go:149] local asset: /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem -> 111062.pem in /etc/ssl/certs
I0307 18:47:54.523135 26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0307 18:47:54.530858 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem --> /etc/ssl/certs/111062.pem (1708 bytes)
I0307 18:47:54.553945 26384 start.go:303] post-start completed in 119.718718ms
I0307 18:47:54.553971 26384 fix.go:57] fixHost completed within 20.863395553s
I0307 18:47:54.553997 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:54.556837 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.557183 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.557209 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.557405 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:54.557590 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.557727 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.557837 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:54.558046 26384 main.go:141] libmachine: Using SSH client type: native
I0307 18:47:54.558428 26384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1760060] 0x17630e0 <nil> [] 0s} 192.168.39.212 22 <nil> <nil>}
I0307 18:47:54.558440 26384 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0307 18:47:54.666375 26384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1678214874.615825414
I0307 18:47:54.666396 26384 fix.go:207] guest clock: 1678214874.615825414
I0307 18:47:54.666406 26384 fix.go:220] Guest: 2023-03-07 18:47:54.615825414 +0000 UTC Remote: 2023-03-07 18:47:54.553975557 +0000 UTC m=+46.403616421 (delta=61.849857ms)
I0307 18:47:54.666428 26384 fix.go:191] guest clock delta is within tolerance: 61.849857ms
I0307 18:47:54.666435 26384 start.go:83] releasing machines lock for "test-preload-203208", held for 20.975873468s
I0307 18:47:54.666460 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:54.666725 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
I0307 18:47:54.669426 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.669811 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.669848 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.669973 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:54.670422 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:54.670589 26384 main.go:141] libmachine: (test-preload-203208) Calling .DriverName
I0307 18:47:54.670656 26384 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0307 18:47:54.670718 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:54.670826 26384 ssh_runner.go:195] Run: cat /version.json
I0307 18:47:54.670851 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHHostname
I0307 18:47:54.673445 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.673511 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.673800 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.673827 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.673938 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:47:54.673967 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:47:54.674023 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:54.674214 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.674218 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHPort
I0307 18:47:54.674394 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHKeyPath
I0307 18:47:54.674402 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:54.674565 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetSSHUsername
I0307 18:47:54.674569 26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
I0307 18:47:54.674704 26384 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15985-4052/.minikube/machines/test-preload-203208/id_rsa Username:docker}
I0307 18:47:54.759342 26384 ssh_runner.go:195] Run: systemctl --version
I0307 18:47:54.887421 26384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0307 18:47:54.893321 26384 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0307 18:47:54.893397 26384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0307 18:47:54.911277 26384 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0307 18:47:54.911299 26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0307 18:47:54.911409 26384 ssh_runner.go:195] Run: sudo crictl images --output json
I0307 18:47:58.947601 26384 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.036162087s)
I0307 18:47:58.947737 26384 containerd.go:604] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
I0307 18:47:58.947802 26384 ssh_runner.go:195] Run: which lz4
I0307 18:47:58.951928 26384 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0307 18:47:58.955886 26384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0307 18:47:58.955917 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
I0307 18:48:00.759696 26384 containerd.go:551] Took 1.807807 seconds to copy over tarball
I0307 18:48:00.759760 26384 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0307 18:48:03.914699 26384 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.15491167s)
I0307 18:48:03.914730 26384 containerd.go:558] Took 3.155008 seconds to extract the tarball
I0307 18:48:03.914761 26384 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0307 18:48:03.954806 26384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 18:48:04.051307 26384 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 18:48:04.067055 26384 start.go:485] detecting cgroup driver to use...
I0307 18:48:04.067143 26384 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0307 18:48:06.737555 26384 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.670382401s)
I0307 18:48:06.737634 26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0307 18:48:06.749559 26384 docker.go:186] disabling cri-docker service (if available) ...
I0307 18:48:06.749615 26384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0307 18:48:06.761329 26384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0307 18:48:06.773038 26384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0307 18:48:06.870678 26384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0307 18:48:06.979667 26384 docker.go:202] disabling docker service ...
I0307 18:48:06.979735 26384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0307 18:48:06.992492 26384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0307 18:48:07.004415 26384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0307 18:48:07.107126 26384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0307 18:48:07.218342 26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0307 18:48:07.230717 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0307 18:48:07.248387 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
I0307 18:48:07.257036 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0307 18:48:07.266682 26384 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0307 18:48:07.266740 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0307 18:48:07.276084 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 18:48:07.285768 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0307 18:48:07.295044 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0307 18:48:07.304543 26384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0307 18:48:07.314540 26384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0307 18:48:07.324106 26384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0307 18:48:07.332553 26384 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0307 18:48:07.332592 26384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0307 18:48:07.345783 26384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0307 18:48:07.354423 26384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0307 18:48:07.450860 26384 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0307 18:48:07.472878 26384 start.go:532] Will wait 60s for socket path /run/containerd/containerd.sock
I0307 18:48:07.472979 26384 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0307 18:48:07.480739 26384 retry.go:31] will retry after 1.355526534s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0307 18:48:08.836380 26384 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0307 18:48:08.842045 26384 start.go:553] Will wait 60s for crictl version
I0307 18:48:08.842108 26384 ssh_runner.go:195] Run: which crictl
I0307 18:48:08.846136 26384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0307 18:48:08.879500 26384 start.go:569] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.6.18
RuntimeApiVersion: v1alpha2
I0307 18:48:08.879555 26384 ssh_runner.go:195] Run: containerd --version
I0307 18:48:08.907039 26384 ssh_runner.go:195] Run: containerd --version
I0307 18:48:08.937824 26384 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.18 ...
I0307 18:48:08.939189 26384 main.go:141] libmachine: (test-preload-203208) Calling .GetIP
I0307 18:48:08.941766 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:48:08.942253 26384 main.go:141] libmachine: (test-preload-203208) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:37:98", ip: ""} in network mk-test-preload-203208: {Iface:virbr1 ExpiryTime:2023-03-07 19:47:45 +0000 UTC Type:0 Mac:52:54:00:c5:37:98 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:test-preload-203208 Clientid:01:52:54:00:c5:37:98}
I0307 18:48:08.942274 26384 main.go:141] libmachine: (test-preload-203208) DBG | domain test-preload-203208 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:37:98 in network mk-test-preload-203208
I0307 18:48:08.942470 26384 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0307 18:48:08.946333 26384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0307 18:48:08.958372 26384 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
I0307 18:48:08.958447 26384 ssh_runner.go:195] Run: sudo crictl images --output json
I0307 18:48:08.984433 26384 containerd.go:608] all images are preloaded for containerd runtime.
I0307 18:48:08.984454 26384 containerd.go:522] Images already preloaded, skipping extraction
I0307 18:48:08.984503 26384 ssh_runner.go:195] Run: sudo crictl images --output json
I0307 18:48:09.011132 26384 containerd.go:608] all images are preloaded for containerd runtime.
I0307 18:48:09.011156 26384 cache_images.go:84] Images are preloaded, skipping loading
I0307 18:48:09.011204 26384 ssh_runner.go:195] Run: sudo crictl info
I0307 18:48:09.039874 26384 cni.go:84] Creating CNI manager for ""
I0307 18:48:09.039898 26384 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
I0307 18:48:09.039907 26384 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0307 18:48:09.039928 26384 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-203208 NodeName:test-preload-203208 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0307 18:48:09.040095 26384 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.212
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "test-preload-203208"
kubeletExtraArgs:
node-ip: 192.168.39.212
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.4
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0307 18:48:09.040202 26384 kubeadm.go:968] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-203208 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
[Install]
config:
{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0307 18:48:09.040264 26384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
I0307 18:48:09.049030 26384 binaries.go:44] Found k8s binaries, skipping transfer
I0307 18:48:09.049088 26384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0307 18:48:09.057226 26384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (484 bytes)
I0307 18:48:09.073102 26384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0307 18:48:09.087939 26384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
I0307 18:48:09.103091 26384 ssh_runner.go:195] Run: grep 192.168.39.212 control-plane.minikube.internal$ /etc/hosts
I0307 18:48:09.106714 26384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0307 18:48:09.118609 26384 certs.go:56] Setting up /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208 for IP: 192.168.39.212
I0307 18:48:09.118642 26384 certs.go:186] acquiring lock for shared ca certs: {Name:mk07c09235b5b83043c0b2b2f22c2249661f377a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:48:09.118791 26384 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15985-4052/.minikube/ca.key
I0307 18:48:09.118849 26384 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.key
I0307 18:48:09.118912 26384 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.key
I0307 18:48:09.118967 26384 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.key.543da273
I0307 18:48:09.119053 26384 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.key
I0307 18:48:09.119150 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106.pem (1338 bytes)
W0307 18:48:09.119182 26384 certs.go:397] ignoring /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106_empty.pem, impossibly tiny 0 bytes
I0307 18:48:09.119193 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca-key.pem (1679 bytes)
I0307 18:48:09.119222 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/ca.pem (1078 bytes)
I0307 18:48:09.119259 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/cert.pem (1123 bytes)
I0307 18:48:09.119296 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/certs/home/jenkins/minikube-integration/15985-4052/.minikube/certs/key.pem (1679 bytes)
I0307 18:48:09.119354 26384 certs.go:401] found cert: /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem (1708 bytes)
I0307 18:48:09.119887 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0307 18:48:09.142561 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0307 18:48:09.164647 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0307 18:48:09.186856 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0307 18:48:09.209055 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0307 18:48:09.233821 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0307 18:48:09.256607 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0307 18:48:09.279276 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0307 18:48:09.301654 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0307 18:48:09.323040 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/certs/11106.pem --> /usr/share/ca-certificates/11106.pem (1338 bytes)
I0307 18:48:09.344849 26384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15985-4052/.minikube/files/etc/ssl/certs/111062.pem --> /usr/share/ca-certificates/111062.pem (1708 bytes)
I0307 18:48:09.366857 26384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0307 18:48:09.382598 26384 ssh_runner.go:195] Run: openssl version
I0307 18:48:09.387988 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0307 18:48:09.396852 26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0307 18:48:09.401359 26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 7 18:03 /usr/share/ca-certificates/minikubeCA.pem
I0307 18:48:09.401436 26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0307 18:48:09.406740 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0307 18:48:09.415682 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11106.pem && ln -fs /usr/share/ca-certificates/11106.pem /etc/ssl/certs/11106.pem"
I0307 18:48:09.424547 26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11106.pem
I0307 18:48:09.428975 26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 7 18:09 /usr/share/ca-certificates/11106.pem
I0307 18:48:09.429015 26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11106.pem
I0307 18:48:09.434193 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11106.pem /etc/ssl/certs/51391683.0"
I0307 18:48:09.443361 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111062.pem && ln -fs /usr/share/ca-certificates/111062.pem /etc/ssl/certs/111062.pem"
I0307 18:48:09.452688 26384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111062.pem
I0307 18:48:09.457057 26384 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 7 18:09 /usr/share/ca-certificates/111062.pem
I0307 18:48:09.457108 26384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111062.pem
I0307 18:48:09.462237 26384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111062.pem /etc/ssl/certs/3ec20f2e.0"
I0307 18:48:09.471411 26384 kubeadm.go:401] StartCluster: {Name:test-preload-203208 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15923/minikube-v1.29.0-1677261626-15923-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1677262057-15923@sha256:ba92f393dd0b7f192b6f8aeacbf781321f089bd4a09957dd77e36bf01f087fc9 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-203208 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0307 18:48:09.471554 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0307 18:48:09.471596 26384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0307 18:48:09.501095 26384 cri.go:87] found id: ""
I0307 18:48:09.501172 26384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0307 18:48:09.510140 26384 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0307 18:48:09.510163 26384 kubeadm.go:633] restartCluster start
I0307 18:48:09.510218 26384 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0307 18:48:09.518643 26384 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0307 18:48:09.519032 26384 kubeconfig.go:135] verify returned: extract IP: "test-preload-203208" does not appear in /home/jenkins/minikube-integration/15985-4052/kubeconfig
I0307 18:48:09.519129 26384 kubeconfig.go:146] "test-preload-203208" context is missing from /home/jenkins/minikube-integration/15985-4052/kubeconfig - will repair!
I0307 18:48:09.519386 26384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15985-4052/kubeconfig: {Name:mk89c8bdc0292c804b7314ba2438e95e1215b3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0307 18:48:09.519958 26384 kapi.go:59] client config for test-preload-203208: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.crt", KeyFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/profiles/test-preload-203208/client.key", CAFile:"/home/jenkins/minikube-integration/15985-4052/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29a5480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0307 18:48:09.520801 26384 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0307 18:48:09.528914 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:09.528956 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:09.538990 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:10.039696 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:10.039767 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:10.050769 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:10.539371 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:10.539470 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:10.550785 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:11.039988 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:11.040093 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:11.051278 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:11.539936 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:11.540040 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:11.551371 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:12.040000 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:12.040077 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:12.051583 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:12.539114 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:12.539176 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:12.550419 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:13.040079 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:13.040172 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:13.052432 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:13.540058 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:13.540141 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:13.551703 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:14.039765 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:14.039847 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:14.051403 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:14.540016 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:14.540094 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:14.552136 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:15.039754 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:15.039852 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:15.051397 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:15.539956 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:15.540068 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:15.551741 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:16.039191 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:16.039261 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:16.050954 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:16.539468 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:16.539533 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:16.550947 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:17.039455 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:17.039523 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:17.050527 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:17.539123 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:17.539207 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:17.551333 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:18.039916 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:18.039999 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:18.051774 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:18.539677 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:18.539783 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:18.551481 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:19.039543 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:19.039622 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:19.051157 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:19.539906 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:19.539971 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:19.551522 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:19.551546 26384 api_server.go:165] Checking apiserver status ...
I0307 18:48:19.551615 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0307 18:48:19.562103 26384 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0307 18:48:19.562127 26384 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
I0307 18:48:19.562135 26384 kubeadm.go:1120] stopping kube-system containers ...
I0307 18:48:19.562145 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0307 18:48:19.562200 26384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0307 18:48:19.596473 26384 cri.go:87] found id: ""
I0307 18:48:19.596545 26384 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0307 18:48:19.611484 26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0307 18:48:19.620277 26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0307 18:48:19.620347 26384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0307 18:48:19.629402 26384 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0307 18:48:19.629420 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:48:19.729048 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:48:20.693486 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:48:21.045927 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:48:21.125427 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0307 18:48:21.208989 26384 api_server.go:51] waiting for apiserver process to appear ...
I0307 18:48:21.209053 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:21.727096 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:22.226678 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:22.726635 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:23.227460 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:23.726652 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:24.226895 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:24.727601 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:25.227632 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:25.727342 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:26.226885 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:26.727250 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:27.226755 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:27.727168 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:28.227623 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:28.726792 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:29.227535 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:29.727199 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:30.227533 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:30.726863 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:31.226913 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:31.726742 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:32.226629 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:32.726562 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:33.227256 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:33.727095 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:34.227636 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:34.727529 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:35.226672 26384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0307 18:48:35.239643 26384 api_server.go:71] duration metric: took 14.030659958s to wait for apiserver process to appear ...
I0307 18:48:35.239673 26384 api_server.go:87] waiting for apiserver healthz status ...
I0307 18:48:35.239689 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:40.240554 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:48:40.741289 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:45.742137 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:48:46.240766 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:51.241530 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:48:51.740794 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:55.622725 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": read tcp 192.168.39.1:40614->192.168.39.212:8443: read: connection reset by peer
I0307 18:48:55.741069 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:55.741730 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:56.241350 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:56.241974 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:56.741625 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:56.742311 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:57.240872 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:57.241486 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:57.741098 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:57.741815 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:58.240688 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:58.241449 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:58.740916 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:58.741450 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:59.241002 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:59.241562 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:48:59.741376 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:48:59.741967 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:00.241554 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:00.242185 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:00.740765 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:00.741366 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:01.240922 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:01.241524 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:01.741093 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:01.741672 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:02.241289 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:02.241821 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:02.741466 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:02.742055 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:03.240707 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:03.241321 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:03.741112 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:03.741706 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:04.241289 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:04.241805 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:04.741475 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:04.742120 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:05.240659 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:05.241205 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:05.740827 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:05.741407 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:06.240957 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:06.241520 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:06.741097 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:06.741687 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:07.241323 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:07.241898 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:07.741557 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:07.742492 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:08.241389 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:08.242007 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:08.741481 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:08.742046 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:09.240755 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:09.241344 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:09.741175 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:09.741776 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:10.241384 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:10.242065 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:10.741689 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:10.742367 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:11.240908 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:11.241508 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:11.741066 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:11.741702 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:12.241340 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:12.241992 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:12.741591 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:12.742200 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:13.240991 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:13.241618 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:13.741474 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:13.742095 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:14.240668 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:14.241302 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:14.740851 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:14.741426 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:15.240983 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:15.241592 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:15.741169 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:15.741706 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:16.241315 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:16.241927 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:16.741520 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:16.742200 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:17.240744 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:17.241351 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:17.740916 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:22.742180 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:49:23.240982 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:28.241459 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:49:28.740696 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:33.740940 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:49:34.241557 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:37.998029 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": read tcp 192.168.39.1:36774->192.168.39.212:8443: read: connection reset by peer
I0307 18:49:38.240706 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:38.240797 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:38.274793 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:38.274811 26384 cri.go:87] found id: "5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
I0307 18:49:38.274816 26384 cri.go:87] found id: ""
I0307 18:49:38.274822 26384 logs.go:277] 2 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a]
I0307 18:49:38.274884 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.279183 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.283139 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:38.283194 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:38.310826 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:38.310844 26384 cri.go:87] found id: ""
I0307 18:49:38.310850 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:38.310891 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.314471 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:38.314538 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:38.344851 26384 cri.go:87] found id: ""
I0307 18:49:38.344881 26384 logs.go:277] 0 containers: []
W0307 18:49:38.344889 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:38.344894 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:38.344965 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:38.377525 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:38.377548 26384 cri.go:87] found id: ""
I0307 18:49:38.377555 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:38.377609 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.381815 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:38.381869 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:38.417825 26384 cri.go:87] found id: ""
I0307 18:49:38.417845 26384 logs.go:277] 0 containers: []
W0307 18:49:38.417851 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:38.417855 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:38.417925 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:38.454042 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:38.454062 26384 cri.go:87] found id: "a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
I0307 18:49:38.454066 26384 cri.go:87] found id: ""
I0307 18:49:38.454073 26384 logs.go:277] 2 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4]
I0307 18:49:38.454130 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.458203 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:38.461976 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:38.462036 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:38.498530 26384 cri.go:87] found id: ""
I0307 18:49:38.498555 26384 logs.go:277] 0 containers: []
W0307 18:49:38.498566 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:38.498573 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:38.498623 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:38.545888 26384 cri.go:87] found id: ""
I0307 18:49:38.545918 26384 logs.go:277] 0 containers: []
W0307 18:49:38.545926 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:38.545936 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:38.545952 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:38.596180 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:38.596211 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:38.657673 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:38.657718 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:38.670963 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:38.670998 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:38.710963 26384 logs.go:123] Gathering logs for kube-apiserver [5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a] ...
I0307 18:49:38.710992 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
W0307 18:49:38.740233 26384 logs.go:130] failed kube-apiserver [5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a": Process exited with status 1
stdout:
stderr:
E0307 18:49:38.717772 1569 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found" containerID="5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found"
output:
** stderr **
E0307 18:49:38.717772 1569 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found" containerID="5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a"
time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e2f1fd0c9332b68ae9134a4ab4e4d5ef3338729f4c8ea086f2d3d3232ad6d6a\": not found"
** /stderr **
I0307 18:49:38.740259 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:38.740272 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:38.769176 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:38.769208 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:38.816001 26384 logs.go:123] Gathering logs for kube-controller-manager [a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4] ...
I0307 18:49:38.816029 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
W0307 18:49:38.847807 26384 logs.go:130] failed kube-controller-manager [a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4": Process exited with status 1
stdout:
stderr:
E0307 18:49:38.825690 1584 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found" containerID="a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found"
output:
** stderr **
E0307 18:49:38.825690 1584 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found" containerID="a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4"
time="2023-03-07T18:49:38Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"a787a08b571a4656fe1fe86d141354c3bfcdc91432d647bf8ba4304de1cea5b4\": not found"
** /stderr **
I0307 18:49:38.847829 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:38.847839 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:38.960358 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:38.960378 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:38.960391 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:39.024178 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:39.024209 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:41.561116 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:41.561705 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:41.741078 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:41.741163 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:41.770944 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:41.770967 26384 cri.go:87] found id: ""
I0307 18:49:41.770975 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:41.771032 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:41.774913 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:41.774977 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:41.802816 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:41.802838 26384 cri.go:87] found id: ""
I0307 18:49:41.802847 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:41.802892 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:41.806570 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:41.806610 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:41.835237 26384 cri.go:87] found id: ""
I0307 18:49:41.835270 26384 logs.go:277] 0 containers: []
W0307 18:49:41.835276 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:41.835281 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:41.835337 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:41.870305 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:41.870323 26384 cri.go:87] found id: ""
I0307 18:49:41.870329 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:41.870376 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:41.874332 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:41.874383 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:41.901971 26384 cri.go:87] found id: ""
I0307 18:49:41.901993 26384 logs.go:277] 0 containers: []
W0307 18:49:41.901999 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:41.902005 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:41.902057 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:41.929792 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:41.929823 26384 cri.go:87] found id: ""
I0307 18:49:41.929834 26384 logs.go:277] 1 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:41.929885 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:41.933861 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:41.933945 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:41.962195 26384 cri.go:87] found id: ""
I0307 18:49:41.962222 26384 logs.go:277] 0 containers: []
W0307 18:49:41.962230 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:41.962237 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:41.962290 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:41.990939 26384 cri.go:87] found id: ""
I0307 18:49:41.990965 26384 logs.go:277] 0 containers: []
W0307 18:49:41.990972 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:41.990984 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:41.990994 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:42.052031 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:42.052054 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:42.052069 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:42.081594 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:42.081622 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:42.109456 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:42.109493 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:42.177139 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:42.177180 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:42.226652 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:42.226679 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:42.287629 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:42.287659 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:42.299095 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:42.299115 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:42.340655 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:42.340684 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:44.881007 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:44.881568 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:45.241058 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:45.241130 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:45.268565 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:45.268588 26384 cri.go:87] found id: ""
I0307 18:49:45.268596 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:45.268650 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:45.272618 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:45.272685 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:45.299447 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:45.299471 26384 cri.go:87] found id: ""
I0307 18:49:45.299479 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:45.299528 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:45.303332 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:45.303397 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:45.332836 26384 cri.go:87] found id: ""
I0307 18:49:45.332863 26384 logs.go:277] 0 containers: []
W0307 18:49:45.332873 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:45.332881 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:45.332989 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:45.359776 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:45.359795 26384 cri.go:87] found id: ""
I0307 18:49:45.359805 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:45.359864 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:45.363663 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:45.363725 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:45.389419 26384 cri.go:87] found id: ""
I0307 18:49:45.389448 26384 logs.go:277] 0 containers: []
W0307 18:49:45.389459 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:45.389465 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:45.389523 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:45.415773 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:45.415796 26384 cri.go:87] found id: ""
I0307 18:49:45.415804 26384 logs.go:277] 1 containers: [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:45.415860 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:45.419687 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:45.419754 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:45.448748 26384 cri.go:87] found id: ""
I0307 18:49:45.448777 26384 logs.go:277] 0 containers: []
W0307 18:49:45.448786 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:45.448791 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:45.448854 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:45.474641 26384 cri.go:87] found id: ""
I0307 18:49:45.474669 26384 logs.go:277] 0 containers: []
W0307 18:49:45.474679 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:45.474696 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:45.474711 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:45.486226 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:45.486249 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:45.545694 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:45.545714 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:45.545726 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:45.591466 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:45.591493 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:45.623810 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:45.623841 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:45.686240 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:45.686268 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:45.720278 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:45.720302 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:45.745876 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:45.745913 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:45.809485 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:45.809518 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:48.348770 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:48.349502 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:48.741584 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:48.741651 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:48.777550 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:48.777572 26384 cri.go:87] found id: ""
I0307 18:49:48.777578 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:48.777636 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:48.782172 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:48.782233 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:48.818792 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:48.818817 26384 cri.go:87] found id: ""
I0307 18:49:48.818824 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:48.818869 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:48.823044 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:48.823106 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:48.857459 26384 cri.go:87] found id: ""
I0307 18:49:48.857484 26384 logs.go:277] 0 containers: []
W0307 18:49:48.857491 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:48.857498 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:48.857556 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:48.889707 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:48.889728 26384 cri.go:87] found id: ""
I0307 18:49:48.889735 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:48.889778 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:48.894345 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:48.894420 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:48.933590 26384 cri.go:87] found id: ""
I0307 18:49:48.933610 26384 logs.go:277] 0 containers: []
W0307 18:49:48.933617 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:48.933622 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:48.933667 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:48.967476 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:48.967495 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:48.967499 26384 cri.go:87] found id: ""
I0307 18:49:48.967506 26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:48.967549 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:48.971759 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:48.975656 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:48.975714 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:49.026784 26384 cri.go:87] found id: ""
I0307 18:49:49.026821 26384 logs.go:277] 0 containers: []
W0307 18:49:49.026831 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:49.026839 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:49.026900 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:49.055435 26384 cri.go:87] found id: ""
I0307 18:49:49.055458 26384 logs.go:277] 0 containers: []
W0307 18:49:49.055465 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:49.055476 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:49.055490 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:49.089020 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:49.089048 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:49.138877 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:49.138913 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:49.153088 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:49.153113 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:49.220054 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:49.220079 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:49.220098 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:49.260102 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:49.260132 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:49.288829 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:49.288855 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:49.360373 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:49:49.360411 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:49.390432 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:49.390471 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:49.438326 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:49.438360 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:51.999825 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:52.000476 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:52.240790 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:52.240869 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:52.268760 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:52.268782 26384 cri.go:87] found id: ""
I0307 18:49:52.268790 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:52.268860 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:52.273290 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:52.273355 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:52.303004 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:52.303024 26384 cri.go:87] found id: ""
I0307 18:49:52.303031 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:52.303070 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:52.307394 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:52.307454 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:52.334227 26384 cri.go:87] found id: ""
I0307 18:49:52.334252 26384 logs.go:277] 0 containers: []
W0307 18:49:52.334259 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:52.334263 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:52.334308 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:52.365944 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:52.365964 26384 cri.go:87] found id: ""
I0307 18:49:52.365971 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:52.366014 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:52.369575 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:52.369631 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:52.399970 26384 cri.go:87] found id: ""
I0307 18:49:52.399998 26384 logs.go:277] 0 containers: []
W0307 18:49:52.400008 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:52.400015 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:52.400080 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:52.428372 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:52.428394 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:52.428399 26384 cri.go:87] found id: ""
I0307 18:49:52.428404 26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:52.428452 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:52.432426 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:52.436419 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:52.436468 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:52.465745 26384 cri.go:87] found id: ""
I0307 18:49:52.465777 26384 logs.go:277] 0 containers: []
W0307 18:49:52.465786 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:52.465794 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:52.465851 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:52.493993 26384 cri.go:87] found id: ""
I0307 18:49:52.494022 26384 logs.go:277] 0 containers: []
W0307 18:49:52.494032 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:52.494048 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:52.494063 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:52.562310 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:52.562349 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:52.601842 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:52.601867 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:52.663702 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:52.663735 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:52.676175 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:52.676205 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:52.725457 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:52.725478 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:52.725491 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:52.773421 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:52.773446 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:52.820180 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:52.820212 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:52.854035 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:52.854060 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:52.882963 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:49:52.882993 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:55.412727 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:55.413292 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:55.740694 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:55.740782 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:55.769593 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:55.769617 26384 cri.go:87] found id: ""
I0307 18:49:55.769624 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:55.769675 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:55.773846 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:55.773918 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:55.799820 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:55.799844 26384 cri.go:87] found id: ""
I0307 18:49:55.799852 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:55.799904 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:55.803655 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:55.803714 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:55.830795 26384 cri.go:87] found id: ""
I0307 18:49:55.830820 26384 logs.go:277] 0 containers: []
W0307 18:49:55.830829 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:55.830840 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:55.830892 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:55.861486 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:55.861511 26384 cri.go:87] found id: ""
I0307 18:49:55.861519 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:55.861571 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:55.865664 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:55.865712 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:55.892035 26384 cri.go:87] found id: ""
I0307 18:49:55.892057 26384 logs.go:277] 0 containers: []
W0307 18:49:55.892067 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:55.892074 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:55.892122 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:55.921473 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:55.921491 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:55.921503 26384 cri.go:87] found id: ""
I0307 18:49:55.921511 26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:55.921560 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:55.925654 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:55.929475 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:55.929539 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:55.956526 26384 cri.go:87] found id: ""
I0307 18:49:55.956559 26384 logs.go:277] 0 containers: []
W0307 18:49:55.956566 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:55.956571 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:55.956614 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:55.983852 26384 cri.go:87] found id: ""
I0307 18:49:55.983873 26384 logs.go:277] 0 containers: []
W0307 18:49:55.983879 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:55.983891 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:49:55.983905 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:56.013373 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:56.013404 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:56.075477 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:56.075514 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:56.134932 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:56.134953 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:56.134963 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:56.162676 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:56.162702 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:56.205835 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:56.205864 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:56.254193 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:56.254226 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:49:56.291170 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:56.291199 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:56.303219 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:56.303244 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:56.338501 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:56.338530 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:58.906800 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:49:58.907377 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:49:59.240745 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:49:59.240816 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:49:59.270117 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:59.270138 26384 cri.go:87] found id: ""
I0307 18:49:59.270148 26384 logs.go:277] 1 containers: [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:49:59.270194 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:59.277486 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:49:59.277555 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:49:59.319990 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:59.320008 26384 cri.go:87] found id: ""
I0307 18:49:59.320015 26384 logs.go:277] 1 containers: [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:49:59.320056 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:59.324577 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:49:59.324620 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:49:59.355279 26384 cri.go:87] found id: ""
I0307 18:49:59.355308 26384 logs.go:277] 0 containers: []
W0307 18:49:59.355318 26384 logs.go:279] No container was found matching "coredns"
I0307 18:49:59.355325 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:49:59.355383 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:49:59.385970 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:59.386019 26384 cri.go:87] found id: ""
I0307 18:49:59.386029 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:49:59.386084 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:59.389898 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:49:59.389957 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:49:59.418100 26384 cri.go:87] found id: ""
I0307 18:49:59.418123 26384 logs.go:277] 0 containers: []
W0307 18:49:59.418132 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:49:59.418141 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:49:59.418199 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:49:59.448963 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:59.448984 26384 cri.go:87] found id: "476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
I0307 18:49:59.448990 26384 cri.go:87] found id: ""
I0307 18:49:59.448998 26384 logs.go:277] 2 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]
I0307 18:49:59.449053 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:59.452973 26384 ssh_runner.go:195] Run: which crictl
I0307 18:49:59.456699 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:49:59.456745 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:49:59.487041 26384 cri.go:87] found id: ""
I0307 18:49:59.487066 26384 logs.go:277] 0 containers: []
W0307 18:49:59.487075 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:49:59.487081 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:49:59.487141 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:49:59.520702 26384 cri.go:87] found id: ""
I0307 18:49:59.520733 26384 logs.go:277] 0 containers: []
W0307 18:49:59.520744 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:49:59.520756 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:49:59.520770 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:49:59.534981 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:49:59.535020 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:49:59.571150 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:49:59.571176 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:49:59.608785 26384 logs.go:123] Gathering logs for kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6] ...
I0307 18:49:59.608815 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
W0307 18:49:59.635030 26384 logs.go:130] failed kube-controller-manager [476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6": Process exited with status 1
stdout:
stderr:
E0307 18:49:59.613980 2152 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found" containerID="476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
time="2023-03-07T18:49:59Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found"
output:
** stderr **
E0307 18:49:59.613980 2152 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found" containerID="476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6"
time="2023-03-07T18:49:59Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"476022ac461a7b7542fd6e6190d339e25d6c11daf5af4499506489e3be8686f6\": not found"
** /stderr **
I0307 18:49:59.635047 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:49:59.635057 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:49:59.681919 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:49:59.681947 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:49:59.738173 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:49:59.738205 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:49:59.789970 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:49:59.789991 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:49:59.790005 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:49:59.859269 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:49:59.859302 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:49:59.901677 26384 logs.go:123] Gathering logs for container status ...
I0307 18:49:59.901708 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:02.439332 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:07.439703 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0307 18:50:07.741227 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:07.741304 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:07.771935 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:07.771958 26384 cri.go:87] found id: "fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:50:07.771964 26384 cri.go:87] found id: ""
I0307 18:50:07.771972 26384 logs.go:277] 2 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5]
I0307 18:50:07.772033 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.775931 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.779533 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:07.779583 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:07.807355 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:07.807372 26384 cri.go:87] found id: "33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
I0307 18:50:07.807376 26384 cri.go:87] found id: ""
I0307 18:50:07.807382 26384 logs.go:277] 2 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]
I0307 18:50:07.807423 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.810941 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.814428 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:07.814480 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:07.840502 26384 cri.go:87] found id: ""
I0307 18:50:07.840530 26384 logs.go:277] 0 containers: []
W0307 18:50:07.840537 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:07.840543 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:07.840590 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:07.872460 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:07.872482 26384 cri.go:87] found id: ""
I0307 18:50:07.872490 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:07.872532 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.876167 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:07.876234 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:07.902163 26384 cri.go:87] found id: ""
I0307 18:50:07.902185 26384 logs.go:277] 0 containers: []
W0307 18:50:07.902194 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:07.902203 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:07.902264 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:07.934206 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:07.934234 26384 cri.go:87] found id: ""
I0307 18:50:07.934244 26384 logs.go:277] 1 containers: [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
I0307 18:50:07.934302 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:07.937973 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:07.938062 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:07.969362 26384 cri.go:87] found id: ""
I0307 18:50:07.969395 26384 logs.go:277] 0 containers: []
W0307 18:50:07.969406 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:07.969413 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:07.969476 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:07.996288 26384 cri.go:87] found id: ""
I0307 18:50:07.996313 26384 logs.go:277] 0 containers: []
W0307 18:50:07.996322 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:07.996332 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:07.996346 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:08.022863 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:08.022893 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:08.072434 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:08.072467 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:08.110215 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:08.110244 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:08.139123 26384 logs.go:123] Gathering logs for kube-apiserver [fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5] ...
I0307 18:50:08.139152 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe19f45550dd8faa81b51f1d0ab57dc5c7629b9fbf8aae248e190a08866c39e5"
I0307 18:50:08.172722 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:08.172748 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0307 18:50:22.210905 26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (14.038132901s)
W0307 18:50:22.210954 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:22.210963 26384 logs.go:123] Gathering logs for etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7] ...
I0307 18:50:22.210973 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
W0307 18:50:22.243161 26384 logs.go:130] failed etcd [33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7": Process exited with status 1
stdout:
stderr:
E0307 18:50:22.230070 2359 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found" containerID="33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
time="2023-03-07T18:50:22Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found"
output:
** stderr **
E0307 18:50:22.230070 2359 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found" containerID="33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7"
time="2023-03-07T18:50:22Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"33f66ca8336d2075f19ec4afe15adad7a7cf67e3774dfcdb22ceae91d95af0c7\": not found"
** /stderr **
I0307 18:50:22.243182 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:22.243194 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:22.312610 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:50:22.312647 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:22.376483 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:22.376512 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:22.441347 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:22.441379 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:24.956249 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:24.956843 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:25.241295 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:25.241366 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:25.271038 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:25.271057 26384 cri.go:87] found id: ""
I0307 18:50:25.271063 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:25.271112 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:25.275131 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:25.275189 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:25.304102 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:25.304122 26384 cri.go:87] found id: ""
I0307 18:50:25.304131 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:25.304176 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:25.308112 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:25.308165 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:25.335593 26384 cri.go:87] found id: ""
I0307 18:50:25.335621 26384 logs.go:277] 0 containers: []
W0307 18:50:25.335631 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:25.335639 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:25.335696 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:25.366744 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:25.366765 26384 cri.go:87] found id: ""
I0307 18:50:25.366773 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:25.366814 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:25.370479 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:25.370523 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:25.397628 26384 cri.go:87] found id: ""
I0307 18:50:25.397651 26384 logs.go:277] 0 containers: []
W0307 18:50:25.397657 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:25.397662 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:25.397703 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:25.424370 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:25.424388 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:25.424392 26384 cri.go:87] found id: ""
I0307 18:50:25.424399 26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
I0307 18:50:25.424438 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:25.428375 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:25.432135 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:25.432197 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:25.464666 26384 cri.go:87] found id: ""
I0307 18:50:25.464686 26384 logs.go:277] 0 containers: []
W0307 18:50:25.464693 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:25.464698 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:25.464754 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:25.495748 26384 cri.go:87] found id: ""
I0307 18:50:25.495771 26384 logs.go:277] 0 containers: []
W0307 18:50:25.495778 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:25.495798 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:25.495816 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:25.552387 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:25.552409 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:25.552419 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:25.585072 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:25.585100 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:25.612624 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:25.612652 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:25.642351 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:25.642375 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:25.696054 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:25.696080 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:25.759230 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:25.759261 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:25.771377 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:25.771400 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:25.814932 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:25.814958 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:25.880431 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:50:25.880462 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:28.429316 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:28.430023 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:28.740900 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:28.740981 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:28.771490 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:28.771510 26384 cri.go:87] found id: ""
I0307 18:50:28.771517 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:28.771573 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:28.775481 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:28.775544 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:28.803618 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:28.803637 26384 cri.go:87] found id: ""
I0307 18:50:28.803644 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:28.803682 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:28.807610 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:28.807656 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:28.837030 26384 cri.go:87] found id: ""
I0307 18:50:28.837048 26384 logs.go:277] 0 containers: []
W0307 18:50:28.837053 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:28.837058 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:28.837105 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:28.868318 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:28.868344 26384 cri.go:87] found id: ""
I0307 18:50:28.868353 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:28.868412 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:28.872041 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:28.872096 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:28.900155 26384 cri.go:87] found id: ""
I0307 18:50:28.900186 26384 logs.go:277] 0 containers: []
W0307 18:50:28.900195 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:28.900206 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:28.900266 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:28.928973 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:28.929007 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:28.929014 26384 cri.go:87] found id: ""
I0307 18:50:28.929022 26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
I0307 18:50:28.929080 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:28.932963 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:28.936674 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:28.936728 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:28.965932 26384 cri.go:87] found id: ""
I0307 18:50:28.965955 26384 logs.go:277] 0 containers: []
W0307 18:50:28.965965 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:28.965972 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:28.966027 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:28.996172 26384 cri.go:87] found id: ""
I0307 18:50:28.996202 26384 logs.go:277] 0 containers: []
W0307 18:50:28.996213 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:28.996230 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:28.996252 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:29.027476 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:50:29.027505 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:29.068982 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:29.069007 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:29.123121 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:29.123155 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:29.154965 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:29.154990 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:29.221021 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:29.221051 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:29.275777 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:29.275800 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:29.275817 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:29.305802 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:29.305836 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:29.374935 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:29.374971 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:29.404375 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:29.404401 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:31.916470 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:31.917095 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:32.241577 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:32.241647 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:32.273069 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:32.273102 26384 cri.go:87] found id: ""
I0307 18:50:32.273108 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:32.273164 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:32.277800 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:32.277842 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:32.312694 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:32.312722 26384 cri.go:87] found id: ""
I0307 18:50:32.312732 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:32.312778 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:32.316764 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:32.316809 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:32.348032 26384 cri.go:87] found id: ""
I0307 18:50:32.348049 26384 logs.go:277] 0 containers: []
W0307 18:50:32.348054 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:32.348059 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:32.348116 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:32.382261 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:32.382286 26384 cri.go:87] found id: ""
I0307 18:50:32.382297 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:32.382355 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:32.386519 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:32.386583 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:32.423869 26384 cri.go:87] found id: ""
I0307 18:50:32.423890 26384 logs.go:277] 0 containers: []
W0307 18:50:32.423897 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:32.423902 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:32.423964 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:32.461514 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:32.461538 26384 cri.go:87] found id: "1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:32.461545 26384 cri.go:87] found id: ""
I0307 18:50:32.461553 26384 logs.go:277] 2 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611]
I0307 18:50:32.461606 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:32.465604 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:32.469437 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:32.469474 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:32.507355 26384 cri.go:87] found id: ""
I0307 18:50:32.507376 26384 logs.go:277] 0 containers: []
W0307 18:50:32.507388 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:32.507395 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:32.507451 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:32.545202 26384 cri.go:87] found id: ""
I0307 18:50:32.545230 26384 logs.go:277] 0 containers: []
W0307 18:50:32.545240 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:32.545257 26384 logs.go:123] Gathering logs for kube-controller-manager [1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611] ...
I0307 18:50:32.545270 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f6b0c8eb4d062e0b3cfc602c0f3cbaab0df2bda4f0f0e737994f0e13e869611"
I0307 18:50:32.598969 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:32.598996 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:32.666940 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:32.666972 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:32.724486 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:32.724506 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:32.724516 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:32.758363 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:32.758389 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:32.838189 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:32.838228 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:32.891708 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:32.891740 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:32.903720 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:32.903746 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:32.936722 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:32.936745 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:32.969027 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:32.969055 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:35.524418 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:35.525031 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:35.741445 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:35.741534 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:35.771644 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:35.771665 26384 cri.go:87] found id: ""
I0307 18:50:35.771673 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:35.771733 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:35.775944 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:35.776002 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:35.807438 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:35.807455 26384 cri.go:87] found id: ""
I0307 18:50:35.807464 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:35.807512 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:35.811521 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:35.811577 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:35.839719 26384 cri.go:87] found id: ""
I0307 18:50:35.839739 26384 logs.go:277] 0 containers: []
W0307 18:50:35.839746 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:35.839751 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:35.839801 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:35.870068 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:35.870089 26384 cri.go:87] found id: ""
I0307 18:50:35.870096 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:35.870139 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:35.873953 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:35.874009 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:35.907548 26384 cri.go:87] found id: ""
I0307 18:50:35.907576 26384 logs.go:277] 0 containers: []
W0307 18:50:35.907584 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:35.907589 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:35.907648 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:35.938809 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:35.938828 26384 cri.go:87] found id: ""
I0307 18:50:35.938834 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:35.938888 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:35.943995 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:35.944045 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:35.971387 26384 cri.go:87] found id: ""
I0307 18:50:35.971406 26384 logs.go:277] 0 containers: []
W0307 18:50:35.971413 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:35.971420 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:35.971470 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:35.998911 26384 cri.go:87] found id: ""
I0307 18:50:35.998938 26384 logs.go:277] 0 containers: []
W0307 18:50:35.998965 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:35.998982 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:35.999012 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:36.038815 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:36.038848 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:36.077044 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:36.077071 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:36.129558 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:36.129591 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:36.129604 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:36.166935 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:36.166960 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:36.195852 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:36.195882 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:36.271088 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:36.271123 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:36.326628 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:36.326662 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:36.389379 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:36.389411 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:38.901954 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:38.902491 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:39.240923 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:39.241009 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:39.271083 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:39.271107 26384 cri.go:87] found id: ""
I0307 18:50:39.271116 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:39.271171 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:39.275511 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:39.275567 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:39.306601 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:39.306618 26384 cri.go:87] found id: ""
I0307 18:50:39.306625 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:39.306672 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:39.311169 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:39.311223 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:39.341921 26384 cri.go:87] found id: ""
I0307 18:50:39.341940 26384 logs.go:277] 0 containers: []
W0307 18:50:39.341945 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:39.341951 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:39.342005 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:39.370475 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:39.370499 26384 cri.go:87] found id: ""
I0307 18:50:39.370509 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:39.370560 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:39.374423 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:39.374480 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:39.404780 26384 cri.go:87] found id: ""
I0307 18:50:39.404801 26384 logs.go:277] 0 containers: []
W0307 18:50:39.404809 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:39.404819 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:39.404877 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:39.435660 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:39.435684 26384 cri.go:87] found id: ""
I0307 18:50:39.435692 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:39.435746 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:39.439799 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:39.439857 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:39.468225 26384 cri.go:87] found id: ""
I0307 18:50:39.468250 26384 logs.go:277] 0 containers: []
W0307 18:50:39.468259 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:39.468267 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:39.468325 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:39.500922 26384 cri.go:87] found id: ""
I0307 18:50:39.500949 26384 logs.go:277] 0 containers: []
W0307 18:50:39.500958 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:39.500982 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:39.500995 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:39.530882 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:39.530921 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:39.600657 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:39.600685 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:39.649285 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:39.649317 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:39.697957 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:39.697989 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:39.759513 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:39.759544 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:39.772345 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:39.772373 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:39.831389 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:39.831411 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:39.831421 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:39.864274 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:39.864314 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:42.400891 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:42.401466 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:42.740872 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:42.740939 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:42.768431 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:42.768453 26384 cri.go:87] found id: ""
I0307 18:50:42.768460 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:42.768513 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:42.772288 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:42.772331 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:42.798526 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:42.798553 26384 cri.go:87] found id: ""
I0307 18:50:42.798562 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:42.798603 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:42.802234 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:42.802282 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:42.828743 26384 cri.go:87] found id: ""
I0307 18:50:42.828762 26384 logs.go:277] 0 containers: []
W0307 18:50:42.828769 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:42.828774 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:42.828825 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:42.856471 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:42.856494 26384 cri.go:87] found id: ""
I0307 18:50:42.856501 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:42.856546 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:42.860506 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:42.860571 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:42.886392 26384 cri.go:87] found id: ""
I0307 18:50:42.886416 26384 logs.go:277] 0 containers: []
W0307 18:50:42.886423 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:42.886428 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:42.886474 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:42.913452 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:42.913478 26384 cri.go:87] found id: ""
I0307 18:50:42.913487 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:42.913532 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:42.917323 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:42.917383 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:42.943946 26384 cri.go:87] found id: ""
I0307 18:50:42.943964 26384 logs.go:277] 0 containers: []
W0307 18:50:42.943970 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:42.943975 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:42.944025 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:42.969863 26384 cri.go:87] found id: ""
I0307 18:50:42.969888 26384 logs.go:277] 0 containers: []
W0307 18:50:42.969896 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:42.969927 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:42.969944 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:43.027701 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:43.027737 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:43.041018 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:43.041051 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:43.090630 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:43.090658 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:43.090670 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:43.162692 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:43.162728 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:43.208000 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:43.208025 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:43.241826 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:43.241853 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:43.272472 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:43.272497 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:43.323281 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:43.323311 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:45.854952 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:45.855553 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:46.241035 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:46.241121 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:46.274554 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:46.274576 26384 cri.go:87] found id: ""
I0307 18:50:46.274583 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:46.274637 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:46.278942 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:46.278994 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:46.307295 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:46.307313 26384 cri.go:87] found id: ""
I0307 18:50:46.307320 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:46.307363 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:46.311114 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:46.311163 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:46.341762 26384 cri.go:87] found id: ""
I0307 18:50:46.341780 26384 logs.go:277] 0 containers: []
W0307 18:50:46.341787 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:46.341792 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:46.341852 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:46.374164 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:46.374187 26384 cri.go:87] found id: ""
I0307 18:50:46.374196 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:46.374252 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:46.378131 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:46.378201 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:46.406158 26384 cri.go:87] found id: ""
I0307 18:50:46.406176 26384 logs.go:277] 0 containers: []
W0307 18:50:46.406182 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:46.406188 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:46.406230 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:46.434896 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:46.434922 26384 cri.go:87] found id: ""
I0307 18:50:46.434931 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:46.434985 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:46.438785 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:46.438842 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:46.469078 26384 cri.go:87] found id: ""
I0307 18:50:46.469100 26384 logs.go:277] 0 containers: []
W0307 18:50:46.469107 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:46.469113 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:46.469178 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:46.500068 26384 cri.go:87] found id: ""
I0307 18:50:46.500096 26384 logs.go:277] 0 containers: []
W0307 18:50:46.500105 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:46.500117 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:46.500128 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:46.537674 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:46.537702 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:46.599647 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:46.599677 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:46.611626 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:46.611656 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:46.664489 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:46.664513 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:46.664526 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:46.698473 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:46.698501 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:46.730118 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:46.730147 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:46.777380 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:46.777407 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:46.827387 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:46.827416 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:49.400363 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:49.400915 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:49.741647 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:49.741733 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:49.774027 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:49.774056 26384 cri.go:87] found id: ""
I0307 18:50:49.774065 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:49.774123 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:49.778228 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:49.778286 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:49.807806 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:49.807832 26384 cri.go:87] found id: ""
I0307 18:50:49.807841 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:49.807884 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:49.811537 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:49.811584 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:49.839443 26384 cri.go:87] found id: ""
I0307 18:50:49.839468 26384 logs.go:277] 0 containers: []
W0307 18:50:49.839477 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:49.839485 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:49.839543 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:49.868206 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:49.868225 26384 cri.go:87] found id: ""
I0307 18:50:49.868232 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:49.868273 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:49.871988 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:49.872029 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:49.903763 26384 cri.go:87] found id: ""
I0307 18:50:49.903790 26384 logs.go:277] 0 containers: []
W0307 18:50:49.903802 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:49.903809 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:49.903869 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:49.931386 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:49.931408 26384 cri.go:87] found id: ""
I0307 18:50:49.931417 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:49.931470 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:49.935416 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:49.935472 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:49.964413 26384 cri.go:87] found id: ""
I0307 18:50:49.964442 26384 logs.go:277] 0 containers: []
W0307 18:50:49.964451 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:49.964457 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:49.964519 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:49.995371 26384 cri.go:87] found id: ""
I0307 18:50:49.995400 26384 logs.go:277] 0 containers: []
W0307 18:50:49.995410 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:49.995428 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:49.995443 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:50.027383 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:50.027415 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:50.102948 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:50.102987 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:50.153563 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:50.153595 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:50.187209 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:50.187240 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:50.252908 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:50.252940 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:50.265236 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:50.265260 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:50.319484 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:50.319506 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:50.319518 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:50.349093 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:50.349119 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:52.888932 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:52.889665 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:53.241383 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:53.241454 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:53.270824 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:53.270844 26384 cri.go:87] found id: ""
I0307 18:50:53.270851 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:53.270903 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:53.274602 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:53.274642 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:53.307455 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:53.307483 26384 cri.go:87] found id: ""
I0307 18:50:53.307492 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:53.307545 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:53.311591 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:53.311651 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:53.339718 26384 cri.go:87] found id: ""
I0307 18:50:53.339742 26384 logs.go:277] 0 containers: []
W0307 18:50:53.339751 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:53.339758 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:53.339811 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:53.369697 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:53.369729 26384 cri.go:87] found id: ""
I0307 18:50:53.369739 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:53.369781 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:53.373719 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:53.373782 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:53.401736 26384 cri.go:87] found id: ""
I0307 18:50:53.401754 26384 logs.go:277] 0 containers: []
W0307 18:50:53.401760 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:53.401764 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:53.401823 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:53.432212 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:53.432236 26384 cri.go:87] found id: ""
I0307 18:50:53.432244 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:53.432301 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:53.436390 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:53.436449 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:53.465471 26384 cri.go:87] found id: ""
I0307 18:50:53.465500 26384 logs.go:277] 0 containers: []
W0307 18:50:53.465518 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:53.465525 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:53.465583 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:53.493404 26384 cri.go:87] found id: ""
I0307 18:50:53.493431 26384 logs.go:277] 0 containers: []
W0307 18:50:53.493440 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:53.493455 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:53.493468 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:53.556791 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:53.556823 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:53.568973 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:53.568992 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:53.621325 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:53.621345 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:53.621356 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:53.662717 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:53.662744 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:53.693831 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:53.693855 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:53.731078 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:53.731104 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:53.759392 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:53.759416 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:53.827438 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:53.827472 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:56.380799 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:56.381488 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:50:56.740948 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:50:56.741023 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:50:56.777942 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:56.777966 26384 cri.go:87] found id: ""
I0307 18:50:56.777977 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:50:56.778023 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:56.782180 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:50:56.782230 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:50:56.810835 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:56.810861 26384 cri.go:87] found id: ""
I0307 18:50:56.810870 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:50:56.810916 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:56.814853 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:50:56.814919 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:50:56.842426 26384 cri.go:87] found id: ""
I0307 18:50:56.842451 26384 logs.go:277] 0 containers: []
W0307 18:50:56.842459 26384 logs.go:279] No container was found matching "coredns"
I0307 18:50:56.842465 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:50:56.842517 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:50:56.877177 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:56.877204 26384 cri.go:87] found id: ""
I0307 18:50:56.877212 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:50:56.877269 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:56.881405 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:50:56.881477 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:50:56.913559 26384 cri.go:87] found id: ""
I0307 18:50:56.913584 26384 logs.go:277] 0 containers: []
W0307 18:50:56.913594 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:50:56.913602 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:50:56.913659 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:50:56.941955 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:56.941979 26384 cri.go:87] found id: ""
I0307 18:50:56.941987 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:50:56.942045 26384 ssh_runner.go:195] Run: which crictl
I0307 18:50:56.946194 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:50:56.946260 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:50:56.978326 26384 cri.go:87] found id: ""
I0307 18:50:56.978349 26384 logs.go:277] 0 containers: []
W0307 18:50:56.978355 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:50:56.978361 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:50:56.978420 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:50:57.007950 26384 cri.go:87] found id: ""
I0307 18:50:57.007973 26384 logs.go:277] 0 containers: []
W0307 18:50:57.007979 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:50:57.007990 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:50:57.008004 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:50:57.079815 26384 logs.go:123] Gathering logs for container status ...
I0307 18:50:57.079853 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:50:57.120095 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:50:57.120125 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:50:57.180846 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:50:57.180881 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:50:57.193148 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:50:57.193171 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:50:57.246199 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:50:57.246224 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:50:57.246238 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:50:57.299491 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:50:57.299528 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:50:57.335019 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:50:57.335052 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:50:57.363632 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:50:57.363662 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:50:59.901204 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:50:59.901827 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:00.241273 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:00.241359 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:00.271191 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:51:00.271210 26384 cri.go:87] found id: ""
I0307 18:51:00.271217 26384 logs.go:277] 1 containers: [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:51:00.271260 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:00.276060 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:00.276095 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:00.313616 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:00.313635 26384 cri.go:87] found id: ""
I0307 18:51:00.313642 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:51:00.313691 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:00.317695 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:00.317746 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:00.354185 26384 cri.go:87] found id: ""
I0307 18:51:00.354202 26384 logs.go:277] 0 containers: []
W0307 18:51:00.354210 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:00.354217 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:00.354272 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:00.388615 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:00.388637 26384 cri.go:87] found id: ""
I0307 18:51:00.388646 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:00.388708 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:00.392706 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:00.392764 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:00.419909 26384 cri.go:87] found id: ""
I0307 18:51:00.419930 26384 logs.go:277] 0 containers: []
W0307 18:51:00.419937 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:00.419942 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:00.419989 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:00.448896 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:00.448921 26384 cri.go:87] found id: ""
I0307 18:51:00.448929 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:51:00.448982 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:00.452787 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:00.452848 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:00.482963 26384 cri.go:87] found id: ""
I0307 18:51:00.482983 26384 logs.go:277] 0 containers: []
W0307 18:51:00.482989 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:00.482994 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:00.483049 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:00.510864 26384 cri.go:87] found id: ""
I0307 18:51:00.510894 26384 logs.go:277] 0 containers: []
W0307 18:51:00.510905 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:00.510922 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:00.510938 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:00.584622 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:00.584656 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:00.620966 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:00.620997 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:00.633989 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:00.634015 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:00.685115 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:00.685136 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:51:00.685145 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:51:00.722939 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:51:00.722971 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:00.751368 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:00.751399 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:00.814202 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:51:00.814234 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:00.855965 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:00.855990 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:03.406623 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:03.407166 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:03.740702 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:03.740777 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:03.774539 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:03.774560 26384 cri.go:87] found id: "1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:51:03.774567 26384 cri.go:87] found id: ""
I0307 18:51:03.774575 26384 logs.go:277] 2 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed]
I0307 18:51:03.774639 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:03.778696 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:03.782771 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:03.782817 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:03.818150 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:03.818173 26384 cri.go:87] found id: ""
I0307 18:51:03.818182 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:51:03.818226 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:03.822385 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:03.822442 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:03.855669 26384 cri.go:87] found id: ""
I0307 18:51:03.855697 26384 logs.go:277] 0 containers: []
W0307 18:51:03.855706 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:03.855713 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:03.855765 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:03.888270 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:03.888297 26384 cri.go:87] found id: ""
I0307 18:51:03.888304 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:03.888346 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:03.892269 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:03.892332 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:03.920187 26384 cri.go:87] found id: ""
I0307 18:51:03.920221 26384 logs.go:277] 0 containers: []
W0307 18:51:03.920232 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:03.920239 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:03.920296 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:03.953587 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:03.953613 26384 cri.go:87] found id: ""
I0307 18:51:03.953620 26384 logs.go:277] 1 containers: [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:51:03.953664 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:03.957799 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:03.957864 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:03.990134 26384 cri.go:87] found id: ""
I0307 18:51:03.990163 26384 logs.go:277] 0 containers: []
W0307 18:51:03.990173 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:03.990180 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:03.990252 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:04.027162 26384 cri.go:87] found id: ""
I0307 18:51:04.027193 26384 logs.go:277] 0 containers: []
W0307 18:51:04.027203 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:04.027222 26384 logs.go:123] Gathering logs for kube-apiserver [1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed] ...
I0307 18:51:04.027242 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d8cc825e2e2c80bc2796b69d6eecaa07db5a7e3dd0959a6d4432a5315f06aed"
I0307 18:51:04.067517 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:04.067549 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:04.149401 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:51:04.149431 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:04.193745 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:04.193773 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:04.255156 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:04.255194 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:04.273611 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:04.273640 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0307 18:51:25.368122 26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (21.094454524s)
W0307 18:51:25.368169 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:25.368184 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:25.368198 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:25.400867 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:51:25.400894 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:25.431796 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:25.431828 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:25.487683 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:25.487715 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:28.026074 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:28.026610 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:28.241444 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:28.241526 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:28.274761 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:28.274787 26384 cri.go:87] found id: ""
I0307 18:51:28.274794 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:28.274855 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:28.279831 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:28.279890 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:28.313516 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:28.313534 26384 cri.go:87] found id: ""
I0307 18:51:28.313546 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:51:28.313588 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:28.317666 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:28.317719 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:28.347101 26384 cri.go:87] found id: ""
I0307 18:51:28.347124 26384 logs.go:277] 0 containers: []
W0307 18:51:28.347131 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:28.347136 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:28.347198 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:28.378300 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:28.378320 26384 cri.go:87] found id: ""
I0307 18:51:28.378326 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:28.378377 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:28.382695 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:28.382753 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:28.410959 26384 cri.go:87] found id: ""
I0307 18:51:28.410981 26384 logs.go:277] 0 containers: []
W0307 18:51:28.410988 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:28.410995 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:28.411048 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:28.441806 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:28.441826 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:28.441833 26384 cri.go:87] found id: ""
I0307 18:51:28.441842 26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:51:28.441892 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:28.446211 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:28.450221 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:28.450282 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:28.483257 26384 cri.go:87] found id: ""
I0307 18:51:28.483279 26384 logs.go:277] 0 containers: []
W0307 18:51:28.483286 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:28.483292 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:28.483358 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:28.510972 26384 cri.go:87] found id: ""
I0307 18:51:28.510998 26384 logs.go:277] 0 containers: []
W0307 18:51:28.511008 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:28.511026 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:28.511044 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:28.524745 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:28.524776 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:28.578288 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:28.578311 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:28.578323 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:28.611345 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:28.611382 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:28.683142 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:28.683180 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:28.713237 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:51:28.713266 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:28.751528 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:28.751554 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:28.789824 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:28.789849 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:28.849258 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:51:28.849288 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:28.881741 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:28.881766 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:31.435018 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:31.435708 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:31.741199 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:31.741275 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:31.775567 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:31.775595 26384 cri.go:87] found id: ""
I0307 18:51:31.775603 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:31.775660 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:31.779786 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:31.779843 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:31.811197 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:31.811217 26384 cri.go:87] found id: ""
I0307 18:51:31.811225 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:51:31.811279 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:31.815320 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:31.815380 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:31.844870 26384 cri.go:87] found id: ""
I0307 18:51:31.844898 26384 logs.go:277] 0 containers: []
W0307 18:51:31.844907 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:31.844915 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:31.844992 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:31.872742 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:31.872765 26384 cri.go:87] found id: ""
I0307 18:51:31.872779 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:31.872834 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:31.876867 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:31.876935 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:31.903271 26384 cri.go:87] found id: ""
I0307 18:51:31.903299 26384 logs.go:277] 0 containers: []
W0307 18:51:31.903306 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:31.903311 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:31.903361 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:31.930122 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:31.930143 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:31.930147 26384 cri.go:87] found id: ""
I0307 18:51:31.930153 26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:51:31.930194 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:31.933837 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:31.937392 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:31.937451 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:31.963795 26384 cri.go:87] found id: ""
I0307 18:51:31.963818 26384 logs.go:277] 0 containers: []
W0307 18:51:31.963824 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:31.963830 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:31.963871 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:31.997078 26384 cri.go:87] found id: ""
I0307 18:51:31.997101 26384 logs.go:277] 0 containers: []
W0307 18:51:31.997107 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:31.997119 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:31.997133 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:32.085403 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:32.085436 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:32.115532 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:32.115557 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:32.171653 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:32.171688 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:32.204332 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:32.204361 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:32.216172 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:32.216197 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:32.266551 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:32.266575 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:51:32.266593 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:32.297132 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:51:32.297159 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:32.344077 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:32.344105 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:32.403948 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:32.403977 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:34.935152 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:34.935872 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:35.241335 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:35.241407 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:35.270388 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:35.270412 26384 cri.go:87] found id: ""
I0307 18:51:35.270418 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:35.270468 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:35.275051 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:35.275114 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:35.304925 26384 cri.go:87] found id: "28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:35.304971 26384 cri.go:87] found id: ""
I0307 18:51:35.304979 26384 logs.go:277] 1 containers: [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10]
I0307 18:51:35.305030 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:35.308987 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:35.309043 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:35.334992 26384 cri.go:87] found id: ""
I0307 18:51:35.335015 26384 logs.go:277] 0 containers: []
W0307 18:51:35.335024 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:35.335031 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:35.335078 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:35.363029 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:35.363054 26384 cri.go:87] found id: ""
I0307 18:51:35.363062 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:35.363112 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:35.366976 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:35.367027 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:35.393011 26384 cri.go:87] found id: ""
I0307 18:51:35.393033 26384 logs.go:277] 0 containers: []
W0307 18:51:35.393040 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:35.393046 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:35.393089 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:35.418706 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:35.418731 26384 cri.go:87] found id: "75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:35.418738 26384 cri.go:87] found id: ""
I0307 18:51:35.418746 26384 logs.go:277] 2 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7]
I0307 18:51:35.418795 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:35.422711 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:35.426344 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:35.426404 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:35.453517 26384 cri.go:87] found id: ""
I0307 18:51:35.453540 26384 logs.go:277] 0 containers: []
W0307 18:51:35.453547 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:35.453552 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:35.453600 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:35.480473 26384 cri.go:87] found id: ""
I0307 18:51:35.480506 26384 logs.go:277] 0 containers: []
W0307 18:51:35.480535 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:35.480557 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:35.480572 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:35.514397 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:35.514430 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:35.553507 26384 logs.go:123] Gathering logs for kube-controller-manager [75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7] ...
I0307 18:51:35.553543 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75a673b46eb8570cc53220ecca651d0f96c37720a38df075d1b6b81b881d06b7"
I0307 18:51:35.594291 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:35.594323 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:35.649916 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:35.649950 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:35.708932 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:35.708962 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:35.720655 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:35.720682 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:35.775147 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:35.775170 26384 logs.go:123] Gathering logs for etcd [28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10] ...
I0307 18:51:35.775185 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28a2d1c211158879b4b3baa80fa81e9cebe64ddb83141bb6b8b28b9274581c10"
I0307 18:51:35.808353 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:35.808378 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:35.888351 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:35.888387 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:38.421085 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:38.421679 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:38.741179 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:38.741264 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:38.771512 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:38.771541 26384 cri.go:87] found id: ""
I0307 18:51:38.771552 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:38.771608 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:38.775448 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:38.775518 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:38.803713 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:38.803738 26384 cri.go:87] found id: ""
I0307 18:51:38.803746 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:38.803797 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:38.807432 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:38.807485 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:38.841539 26384 cri.go:87] found id: ""
I0307 18:51:38.841564 26384 logs.go:277] 0 containers: []
W0307 18:51:38.841572 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:38.841580 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:38.841700 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:38.873163 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:38.873189 26384 cri.go:87] found id: ""
I0307 18:51:38.873197 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:38.873244 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:38.876827 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:38.876887 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:38.904500 26384 cri.go:87] found id: ""
I0307 18:51:38.904525 26384 logs.go:277] 0 containers: []
W0307 18:51:38.904535 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:38.904541 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:38.904605 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:38.933684 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:38.933703 26384 cri.go:87] found id: ""
I0307 18:51:38.933708 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:38.933753 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:38.937611 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:38.937673 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:38.967298 26384 cri.go:87] found id: ""
I0307 18:51:38.967317 26384 logs.go:277] 0 containers: []
W0307 18:51:38.967323 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:38.967329 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:38.967381 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:38.994836 26384 cri.go:87] found id: ""
I0307 18:51:38.994857 26384 logs.go:277] 0 containers: []
W0307 18:51:38.994864 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:38.994875 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:38.994885 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:39.013172 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:39.013202 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:39.050550 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:39.050577 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:39.081654 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:39.081686 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:39.122178 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:39.122206 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:39.157534 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:39.157558 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:39.215607 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:39.215638 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:39.270533 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:39.270555 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:39.270565 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:39.351014 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:39.351046 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:41.910810 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:41.911444 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:42.240866 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:42.240934 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:42.270659 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:42.270686 26384 cri.go:87] found id: ""
I0307 18:51:42.270693 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:42.270744 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:42.274956 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:42.275009 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:42.302640 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:42.302659 26384 cri.go:87] found id: ""
I0307 18:51:42.302666 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:42.302708 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:42.306628 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:42.306683 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:42.333725 26384 cri.go:87] found id: ""
I0307 18:51:42.333744 26384 logs.go:277] 0 containers: []
W0307 18:51:42.333750 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:42.333757 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:42.333797 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:42.361433 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:42.361455 26384 cri.go:87] found id: ""
I0307 18:51:42.361461 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:42.361525 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:42.365419 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:42.365475 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:42.390359 26384 cri.go:87] found id: ""
I0307 18:51:42.390386 26384 logs.go:277] 0 containers: []
W0307 18:51:42.390394 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:42.390400 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:42.390466 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:42.418877 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:42.418900 26384 cri.go:87] found id: ""
I0307 18:51:42.418909 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:42.418961 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:42.422852 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:42.422922 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:42.449901 26384 cri.go:87] found id: ""
I0307 18:51:42.449937 26384 logs.go:277] 0 containers: []
W0307 18:51:42.449947 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:42.449953 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:42.450013 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:42.478218 26384 cri.go:87] found id: ""
I0307 18:51:42.478243 26384 logs.go:277] 0 containers: []
W0307 18:51:42.478251 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:42.478269 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:42.478286 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:42.506655 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:42.506700 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:42.582409 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:42.582444 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:42.615907 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:42.615931 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:42.657529 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:42.657560 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:42.712843 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:42.712871 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:42.745993 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:42.746017 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:42.808149 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:42.808182 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:42.820414 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:42.820435 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:42.873183 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:45.374057 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:45.374585 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:45.741047 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:45.741134 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:45.770908 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:45.770936 26384 cri.go:87] found id: ""
I0307 18:51:45.770944 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:45.771001 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:45.775199 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:45.775271 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:45.804540 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:45.804560 26384 cri.go:87] found id: ""
I0307 18:51:45.804567 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:45.804609 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:45.808609 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:45.808686 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:45.835602 26384 cri.go:87] found id: ""
I0307 18:51:45.835627 26384 logs.go:277] 0 containers: []
W0307 18:51:45.835635 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:45.835643 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:45.835702 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:45.868007 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:45.868029 26384 cri.go:87] found id: ""
I0307 18:51:45.868038 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:45.868098 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:45.872229 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:45.872288 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:45.900275 26384 cri.go:87] found id: ""
I0307 18:51:45.900301 26384 logs.go:277] 0 containers: []
W0307 18:51:45.900310 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:45.900317 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:45.900380 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:45.928163 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:45.928182 26384 cri.go:87] found id: ""
I0307 18:51:45.928189 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:45.928248 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:45.932473 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:45.932532 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:45.961937 26384 cri.go:87] found id: ""
I0307 18:51:45.961971 26384 logs.go:277] 0 containers: []
W0307 18:51:45.961982 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:45.961990 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:45.962041 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:45.991124 26384 cri.go:87] found id: ""
I0307 18:51:45.991158 26384 logs.go:277] 0 containers: []
W0307 18:51:45.991165 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:45.991178 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:45.991195 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:46.055916 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:46.055947 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:46.069670 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:46.069697 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:46.123987 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:46.124010 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:46.124024 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:46.158206 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:46.158235 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:46.234157 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:46.234188 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:46.277028 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:46.277054 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:46.331295 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:46.331325 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:46.369056 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:46.369081 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:48.902692 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:48.903509 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:49.240949 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:49.241016 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:49.270709 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:49.270735 26384 cri.go:87] found id: ""
I0307 18:51:49.270744 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:49.270804 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:49.274731 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:49.274789 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:49.302081 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:49.302100 26384 cri.go:87] found id: ""
I0307 18:51:49.302108 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:49.302166 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:49.306174 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:49.306234 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:49.333438 26384 cri.go:87] found id: ""
I0307 18:51:49.333461 26384 logs.go:277] 0 containers: []
W0307 18:51:49.333468 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:49.333474 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:49.333527 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:49.365533 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:49.365562 26384 cri.go:87] found id: ""
I0307 18:51:49.365569 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:49.365610 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:49.369216 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:49.369276 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:49.398301 26384 cri.go:87] found id: ""
I0307 18:51:49.398326 26384 logs.go:277] 0 containers: []
W0307 18:51:49.398334 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:49.398341 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:49.398398 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:49.427703 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:49.427722 26384 cri.go:87] found id: ""
I0307 18:51:49.427730 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:49.427774 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:49.431651 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:49.431702 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:49.462642 26384 cri.go:87] found id: ""
I0307 18:51:49.462667 26384 logs.go:277] 0 containers: []
W0307 18:51:49.462674 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:49.462679 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:49.462729 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:49.489078 26384 cri.go:87] found id: ""
I0307 18:51:49.489106 26384 logs.go:277] 0 containers: []
W0307 18:51:49.489116 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:49.489129 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:49.489140 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:49.518966 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:49.518994 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:49.578313 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:49.578343 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:49.632259 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:49.632280 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:49.632292 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:49.665772 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:49.665797 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:49.745503 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:49.745534 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:49.785793 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:49.785819 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:49.821781 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:49.821843 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:49.888865 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:49.888906 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:52.403328 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:52.403890 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:52.741393 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:52.741477 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:52.770492 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:52.770514 26384 cri.go:87] found id: ""
I0307 18:51:52.770520 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:52.770575 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:52.774281 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:52.774334 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:52.804403 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:52.804427 26384 cri.go:87] found id: ""
I0307 18:51:52.804435 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:52.804480 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:52.808178 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:52.808226 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:52.836026 26384 cri.go:87] found id: ""
I0307 18:51:52.836048 26384 logs.go:277] 0 containers: []
W0307 18:51:52.836055 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:52.836060 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:52.836118 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:52.867795 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:52.867824 26384 cri.go:87] found id: ""
I0307 18:51:52.867834 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:52.867891 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:52.871532 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:52.871602 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:52.899536 26384 cri.go:87] found id: ""
I0307 18:51:52.899558 26384 logs.go:277] 0 containers: []
W0307 18:51:52.899565 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:52.899570 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:52.899631 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:52.927081 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:52.927105 26384 cri.go:87] found id: ""
I0307 18:51:52.927114 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:52.927170 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:52.930990 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:52.931056 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:52.961939 26384 cri.go:87] found id: ""
I0307 18:51:52.961965 26384 logs.go:277] 0 containers: []
W0307 18:51:52.961973 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:52.961978 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:52.962025 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:52.990556 26384 cri.go:87] found id: ""
I0307 18:51:52.990582 26384 logs.go:277] 0 containers: []
W0307 18:51:52.990589 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:52.990602 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:52.990611 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:53.055863 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:53.055899 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:53.118674 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:53.118699 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:53.118712 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:53.160200 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:53.160226 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:53.193132 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:53.193157 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:53.206488 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:53.206521 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:53.239547 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:53.239575 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:53.271150 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:53.271179 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:53.355907 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:53.355937 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:55.915778 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:55.916343 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:56.240741 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:56.240815 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:56.276584 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:56.276609 26384 cri.go:87] found id: ""
I0307 18:51:56.276616 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:56.276662 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:56.280478 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:56.280543 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:56.310551 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:56.310580 26384 cri.go:87] found id: ""
I0307 18:51:56.310591 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:56.310652 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:56.314325 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:56.314380 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:56.345523 26384 cri.go:87] found id: ""
I0307 18:51:56.345545 26384 logs.go:277] 0 containers: []
W0307 18:51:56.345555 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:56.345562 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:56.345613 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:56.374295 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:56.374316 26384 cri.go:87] found id: ""
I0307 18:51:56.374325 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:56.374369 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:56.377845 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:56.377893 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:56.407290 26384 cri.go:87] found id: ""
I0307 18:51:56.407314 26384 logs.go:277] 0 containers: []
W0307 18:51:56.407323 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:56.407330 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:56.407387 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:56.434800 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:56.434822 26384 cri.go:87] found id: ""
I0307 18:51:56.434831 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:56.434889 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:56.438706 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:56.438771 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:56.469291 26384 cri.go:87] found id: ""
I0307 18:51:56.469321 26384 logs.go:277] 0 containers: []
W0307 18:51:56.469331 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:56.469338 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:56.469400 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:56.496682 26384 cri.go:87] found id: ""
I0307 18:51:56.496707 26384 logs.go:277] 0 containers: []
W0307 18:51:56.496716 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:56.496731 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:51:56.496749 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:51:56.558292 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:51:56.558324 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:51:56.616546 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:51:56.616566 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:51:56.616576 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:56.645444 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:56.645482 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:56.690522 26384 logs.go:123] Gathering logs for container status ...
I0307 18:51:56.690549 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:51:56.729452 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:51:56.729480 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:51:56.741227 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:51:56.741250 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:56.774040 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:51:56.774069 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:56.851946 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:51:56.851980 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:51:59.410226 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:51:59.410809 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:51:59.741513 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:51:59.741583 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:51:59.770692 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:51:59.770715 26384 cri.go:87] found id: ""
I0307 18:51:59.770723 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:51:59.770773 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:59.774597 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:51:59.774652 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:51:59.802266 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:51:59.802286 26384 cri.go:87] found id: ""
I0307 18:51:59.802293 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:51:59.802330 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:59.805853 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:51:59.805892 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:51:59.833448 26384 cri.go:87] found id: ""
I0307 18:51:59.833466 26384 logs.go:277] 0 containers: []
W0307 18:51:59.833473 26384 logs.go:279] No container was found matching "coredns"
I0307 18:51:59.833477 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:51:59.833517 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:51:59.864701 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:51:59.864723 26384 cri.go:87] found id: ""
I0307 18:51:59.864732 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:51:59.864787 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:59.868622 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:51:59.868687 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:51:59.900470 26384 cri.go:87] found id: ""
I0307 18:51:59.900500 26384 logs.go:277] 0 containers: []
W0307 18:51:59.900510 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:51:59.900518 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:51:59.900573 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:51:59.927551 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:51:59.927580 26384 cri.go:87] found id: ""
I0307 18:51:59.927588 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:51:59.927633 26384 ssh_runner.go:195] Run: which crictl
I0307 18:51:59.931339 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:51:59.931393 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:51:59.959403 26384 cri.go:87] found id: ""
I0307 18:51:59.959426 26384 logs.go:277] 0 containers: []
W0307 18:51:59.959436 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:51:59.959442 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:51:59.959484 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:51:59.987595 26384 cri.go:87] found id: ""
I0307 18:51:59.987616 26384 logs.go:277] 0 containers: []
W0307 18:51:59.987623 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:51:59.987637 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:51:59.987654 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:00.035743 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:00.035772 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:00.099440 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:00.099473 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:00.131520 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:00.131549 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:00.208993 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:00.209030 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:00.267588 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:00.267622 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:00.301447 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:00.301476 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:00.313284 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:00.313307 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:00.368862 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:00.368881 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:00.368892 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:02.901502 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:02.902198 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:03.240812 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:03.240884 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:03.271596 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:03.271623 26384 cri.go:87] found id: ""
I0307 18:52:03.271632 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:03.271693 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:03.276075 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:03.276140 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:03.306294 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:03.306321 26384 cri.go:87] found id: ""
I0307 18:52:03.306329 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:03.306372 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:03.310127 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:03.310195 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:03.346928 26384 cri.go:87] found id: ""
I0307 18:52:03.346956 26384 logs.go:277] 0 containers: []
W0307 18:52:03.346964 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:03.346970 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:03.347028 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:03.373901 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:03.373935 26384 cri.go:87] found id: ""
I0307 18:52:03.373944 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:03.374004 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:03.377726 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:03.377816 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:03.408820 26384 cri.go:87] found id: ""
I0307 18:52:03.408855 26384 logs.go:277] 0 containers: []
W0307 18:52:03.408862 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:03.408880 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:03.408938 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:03.437027 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:03.437049 26384 cri.go:87] found id: ""
I0307 18:52:03.437060 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:03.437104 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:03.440989 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:03.441047 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:03.470590 26384 cri.go:87] found id: ""
I0307 18:52:03.470614 26384 logs.go:277] 0 containers: []
W0307 18:52:03.470621 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:03.470627 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:03.470688 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:03.500217 26384 cri.go:87] found id: ""
I0307 18:52:03.500244 26384 logs.go:277] 0 containers: []
W0307 18:52:03.500252 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:03.500267 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:03.500280 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:03.566239 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:03.566268 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:03.625165 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:03.625184 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:03.625195 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:03.682195 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:03.682226 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:03.719700 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:03.719727 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:03.731216 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:03.731240 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:03.763196 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:03.763229 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:03.791661 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:03.791686 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:03.868166 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:03.868202 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:06.409727 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:06.410322 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:06.740737 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:06.740806 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:06.771108 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:06.771137 26384 cri.go:87] found id: ""
I0307 18:52:06.771144 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:06.771189 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:06.775193 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:06.775250 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:06.806716 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:06.806737 26384 cri.go:87] found id: ""
I0307 18:52:06.806746 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:06.806795 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:06.810459 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:06.810504 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:06.837774 26384 cri.go:87] found id: ""
I0307 18:52:06.837797 26384 logs.go:277] 0 containers: []
W0307 18:52:06.837804 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:06.837809 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:06.837860 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:06.866218 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:06.866239 26384 cri.go:87] found id: ""
I0307 18:52:06.866249 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:06.866303 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:06.869982 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:06.870039 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:06.899518 26384 cri.go:87] found id: ""
I0307 18:52:06.899546 26384 logs.go:277] 0 containers: []
W0307 18:52:06.899556 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:06.899562 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:06.899617 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:06.927743 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:06.927770 26384 cri.go:87] found id: ""
I0307 18:52:06.927778 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:06.927820 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:06.931549 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:06.931613 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:06.961419 26384 cri.go:87] found id: ""
I0307 18:52:06.961445 26384 logs.go:277] 0 containers: []
W0307 18:52:06.961452 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:06.961457 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:06.961518 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:06.989502 26384 cri.go:87] found id: ""
I0307 18:52:06.989526 26384 logs.go:277] 0 containers: []
W0307 18:52:06.989532 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:06.989546 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:06.989559 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:07.025827 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:07.025850 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:07.086485 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:07.086512 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:07.098772 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:07.098799 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:07.130198 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:07.130225 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:07.212261 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:07.212293 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:07.268115 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:07.268148 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:07.330511 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:07.330537 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:07.330549 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:07.362299 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:07.362331 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:09.904436 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:09.905035 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:10.241493 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:10.241591 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:10.270226 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:10.270250 26384 cri.go:87] found id: ""
I0307 18:52:10.270259 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:10.270316 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:10.274003 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:10.274065 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:10.301912 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:10.301935 26384 cri.go:87] found id: ""
I0307 18:52:10.301943 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:10.301995 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:10.305750 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:10.305809 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:10.333329 26384 cri.go:87] found id: ""
I0307 18:52:10.333347 26384 logs.go:277] 0 containers: []
W0307 18:52:10.333356 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:10.333364 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:10.333415 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:10.365807 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:10.365830 26384 cri.go:87] found id: ""
I0307 18:52:10.365837 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:10.365876 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:10.369503 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:10.369555 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:10.402354 26384 cri.go:87] found id: ""
I0307 18:52:10.402382 26384 logs.go:277] 0 containers: []
W0307 18:52:10.402391 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:10.402398 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:10.402458 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:10.431242 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:10.431268 26384 cri.go:87] found id: ""
I0307 18:52:10.431278 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:10.431331 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:10.435085 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:10.435150 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:10.462020 26384 cri.go:87] found id: ""
I0307 18:52:10.462044 26384 logs.go:277] 0 containers: []
W0307 18:52:10.462053 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:10.462059 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:10.462117 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:10.492729 26384 cri.go:87] found id: ""
I0307 18:52:10.492755 26384 logs.go:277] 0 containers: []
W0307 18:52:10.492761 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:10.492776 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:10.492788 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:10.550753 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:10.550787 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:10.587328 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:10.587353 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:10.649658 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:10.649690 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:10.688111 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:10.688141 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:10.715243 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:10.715271 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:10.794097 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:10.794129 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:10.806313 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:10.806337 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:10.859925 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:10.859948 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:10.859957 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:13.412753 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:13.413326 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:13.740752 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:13.740822 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:13.769106 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:13.769130 26384 cri.go:87] found id: ""
I0307 18:52:13.769139 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:13.769197 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:13.772932 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:13.772977 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:13.799190 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:13.799214 26384 cri.go:87] found id: ""
I0307 18:52:13.799224 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:13.799272 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:13.803163 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:13.803229 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:13.829114 26384 cri.go:87] found id: ""
I0307 18:52:13.829137 26384 logs.go:277] 0 containers: []
W0307 18:52:13.829143 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:13.829148 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:13.829215 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:13.860207 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:13.860232 26384 cri.go:87] found id: ""
I0307 18:52:13.860241 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:13.860299 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:13.864306 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:13.864365 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:13.895421 26384 cri.go:87] found id: ""
I0307 18:52:13.895447 26384 logs.go:277] 0 containers: []
W0307 18:52:13.895456 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:13.895464 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:13.895523 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:13.926222 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:13.926245 26384 cri.go:87] found id: ""
I0307 18:52:13.926252 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:13.926301 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:13.930178 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:13.930235 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:13.954048 26384 cri.go:87] found id: ""
I0307 18:52:13.954067 26384 logs.go:277] 0 containers: []
W0307 18:52:13.954073 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:13.954081 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:13.954137 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:13.982093 26384 cri.go:87] found id: ""
I0307 18:52:13.982112 26384 logs.go:277] 0 containers: []
W0307 18:52:13.982118 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:13.982130 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:13.982143 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:14.038975 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:14.038990 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:14.039000 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:14.090619 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:14.090645 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:14.148386 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:14.148418 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:14.209750 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:14.209782 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:14.222299 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:14.222320 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:14.259738 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:14.259764 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:14.288148 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:14.288183 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:14.364866 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:14.364898 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:16.896622 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:16.897179 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:17.241681 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:17.241765 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:17.270963 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:17.270985 26384 cri.go:87] found id: ""
I0307 18:52:17.270994 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:17.271055 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:17.274819 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:17.274879 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:17.303431 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:17.303455 26384 cri.go:87] found id: ""
I0307 18:52:17.303464 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:17.303516 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:17.307271 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:17.307316 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:17.336969 26384 cri.go:87] found id: ""
I0307 18:52:17.336994 26384 logs.go:277] 0 containers: []
W0307 18:52:17.337002 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:17.337009 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:17.337061 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:17.364451 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:17.364476 26384 cri.go:87] found id: ""
I0307 18:52:17.364484 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:17.364543 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:17.368076 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:17.368130 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:17.395637 26384 cri.go:87] found id: ""
I0307 18:52:17.395660 26384 logs.go:277] 0 containers: []
W0307 18:52:17.395667 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:17.395672 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:17.395715 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:17.423253 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:17.423273 26384 cri.go:87] found id: ""
I0307 18:52:17.423279 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:17.423321 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:17.427005 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:17.427060 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:17.454713 26384 cri.go:87] found id: ""
I0307 18:52:17.454731 26384 logs.go:277] 0 containers: []
W0307 18:52:17.454736 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:17.454742 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:17.454784 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:17.486176 26384 cri.go:87] found id: ""
I0307 18:52:17.486199 26384 logs.go:277] 0 containers: []
W0307 18:52:17.486206 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:17.486219 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:17.486229 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:17.498032 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:17.498055 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:17.557073 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:17.557097 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:17.557110 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:17.594388 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:17.594418 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:17.620305 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:17.620338 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:17.702872 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:17.702904 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:17.759889 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:17.759926 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:17.817947 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:17.817980 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:17.865944 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:17.865973 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:20.398731 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:20.399378 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:20.740808 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:20.740889 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:20.774030 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:20.774056 26384 cri.go:87] found id: ""
I0307 18:52:20.774066 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:20.774117 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:20.778074 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:20.778136 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:20.806773 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:20.806791 26384 cri.go:87] found id: ""
I0307 18:52:20.806798 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:20.806846 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:20.810652 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:20.810700 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:20.838994 26384 cri.go:87] found id: ""
I0307 18:52:20.839019 26384 logs.go:277] 0 containers: []
W0307 18:52:20.839029 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:20.839042 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:20.839102 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:20.869727 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:20.869748 26384 cri.go:87] found id: ""
I0307 18:52:20.869756 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:20.869812 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:20.873736 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:20.873793 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:20.901823 26384 cri.go:87] found id: ""
I0307 18:52:20.901844 26384 logs.go:277] 0 containers: []
W0307 18:52:20.901851 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:20.901857 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:20.901929 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:20.934273 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:20.934298 26384 cri.go:87] found id: ""
I0307 18:52:20.934306 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:20.934356 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:20.938406 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:20.938472 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:20.969450 26384 cri.go:87] found id: ""
I0307 18:52:20.969479 26384 logs.go:277] 0 containers: []
W0307 18:52:20.969486 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:20.969492 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:20.969541 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:21.001492 26384 cri.go:87] found id: ""
I0307 18:52:21.001514 26384 logs.go:277] 0 containers: []
W0307 18:52:21.001521 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:21.001534 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:21.001548 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:21.054970 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:21.054986 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:21.054995 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:21.088359 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:21.088383 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:21.120677 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:21.120706 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:21.182999 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:21.183047 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:21.245976 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:21.246016 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:21.346906 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:21.346937 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:21.395390 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:21.395425 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:21.428290 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:21.428320 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:23.941739 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:23.942328 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:24.240694 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:24.240774 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:24.270200 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:24.270223 26384 cri.go:87] found id: ""
I0307 18:52:24.270230 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:24.270277 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:24.274395 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:24.274459 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:24.305875 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:24.305898 26384 cri.go:87] found id: ""
I0307 18:52:24.305919 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:24.305974 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:24.309735 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:24.309791 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:24.336466 26384 cri.go:87] found id: ""
I0307 18:52:24.336484 26384 logs.go:277] 0 containers: []
W0307 18:52:24.336493 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:24.336499 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:24.336550 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:24.364312 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:24.364337 26384 cri.go:87] found id: ""
I0307 18:52:24.364347 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:24.364398 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:24.368537 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:24.368610 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:24.399307 26384 cri.go:87] found id: ""
I0307 18:52:24.399333 26384 logs.go:277] 0 containers: []
W0307 18:52:24.399343 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:24.399350 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:24.399410 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:24.428137 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:24.428157 26384 cri.go:87] found id: ""
I0307 18:52:24.428165 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:24.428220 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:24.432114 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:24.432177 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:24.458423 26384 cri.go:87] found id: ""
I0307 18:52:24.458443 26384 logs.go:277] 0 containers: []
W0307 18:52:24.458452 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:24.458458 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:24.458507 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:24.486856 26384 cri.go:87] found id: ""
I0307 18:52:24.486881 26384 logs.go:277] 0 containers: []
W0307 18:52:24.486889 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:24.486907 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:24.486920 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:24.568604 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:24.568635 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:24.609771 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:24.609802 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:24.665713 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:24.665734 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:24.665752 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:24.691910 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:24.691937 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:24.723832 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:24.723860 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:24.764806 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:24.764833 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:24.821496 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:24.821529 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:24.880200 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:24.880230 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:27.393632 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:27.394219 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:27.741710 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:27.741782 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:27.770323 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:27.770343 26384 cri.go:87] found id: ""
I0307 18:52:27.770349 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:27.770405 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:27.774285 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:27.774345 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:27.800912 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:27.800933 26384 cri.go:87] found id: ""
I0307 18:52:27.800942 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:27.800991 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:27.804444 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:27.804490 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:27.836265 26384 cri.go:87] found id: ""
I0307 18:52:27.836290 26384 logs.go:277] 0 containers: []
W0307 18:52:27.836297 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:27.836303 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:27.836359 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:27.865231 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:27.865260 26384 cri.go:87] found id: ""
I0307 18:52:27.865269 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:27.865317 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:27.869523 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:27.869586 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:27.900740 26384 cri.go:87] found id: ""
I0307 18:52:27.900770 26384 logs.go:277] 0 containers: []
W0307 18:52:27.900780 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:27.900787 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:27.900849 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:27.929343 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:27.929371 26384 cri.go:87] found id: ""
I0307 18:52:27.929381 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:27.929440 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:27.933280 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:27.933348 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:27.966078 26384 cri.go:87] found id: ""
I0307 18:52:27.966104 26384 logs.go:277] 0 containers: []
W0307 18:52:27.966111 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:27.966119 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:27.966175 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:27.994539 26384 cri.go:87] found id: ""
I0307 18:52:27.994562 26384 logs.go:277] 0 containers: []
W0307 18:52:27.994568 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:27.994581 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:27.994591 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:28.026948 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:28.026989 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:28.039179 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:28.039208 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:28.094604 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:28.094626 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:28.094637 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:28.134457 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:28.134490 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:28.190768 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:28.192394 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:28.251450 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:28.251489 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:28.285082 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:28.285108 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:28.316724 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:28.316750 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:30.901642 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:30.902211 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:31.241667 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:31.241736 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:31.271253 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:31.271279 26384 cri.go:87] found id: ""
I0307 18:52:31.271288 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:31.271343 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:31.275766 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:31.275822 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:31.304092 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:31.304115 26384 cri.go:87] found id: ""
I0307 18:52:31.304121 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:31.304161 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:31.307829 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:31.307887 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:31.336157 26384 cri.go:87] found id: ""
I0307 18:52:31.336184 26384 logs.go:277] 0 containers: []
W0307 18:52:31.336193 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:31.336201 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:31.336266 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:31.362407 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:31.362427 26384 cri.go:87] found id: ""
I0307 18:52:31.362433 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:31.362484 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:31.366267 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:31.366323 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:31.392005 26384 cri.go:87] found id: ""
I0307 18:52:31.392031 26384 logs.go:277] 0 containers: []
W0307 18:52:31.392040 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:31.392047 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:31.392107 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:31.417145 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:31.417164 26384 cri.go:87] found id: ""
I0307 18:52:31.417170 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:31.417226 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:31.421051 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:31.421093 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:31.452946 26384 cri.go:87] found id: ""
I0307 18:52:31.452966 26384 logs.go:277] 0 containers: []
W0307 18:52:31.452973 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:31.452991 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:31.453072 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:31.482025 26384 cri.go:87] found id: ""
I0307 18:52:31.482048 26384 logs.go:277] 0 containers: []
W0307 18:52:31.482058 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:31.482075 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:31.482094 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:31.535162 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:31.535180 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:31.535190 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:31.575114 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:31.575149 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:31.630597 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:31.630629 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:31.689816 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:31.689854 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:31.703439 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:31.703465 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:31.733755 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:31.733789 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:31.761485 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:31.761517 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:31.849205 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:31.849238 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:34.397092 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:34.399029 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:34.740924 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 18:52:34.741012 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 18:52:34.768741 26384 cri.go:87] found id: "93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:34.768769 26384 cri.go:87] found id: ""
I0307 18:52:34.768776 26384 logs.go:277] 1 containers: [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714]
I0307 18:52:34.768826 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:34.772560 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 18:52:34.772608 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 18:52:34.801197 26384 cri.go:87] found id: "df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:34.801219 26384 cri.go:87] found id: ""
I0307 18:52:34.801226 26384 logs.go:277] 1 containers: [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0]
I0307 18:52:34.801268 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:34.805070 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 18:52:34.805123 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 18:52:34.841217 26384 cri.go:87] found id: ""
I0307 18:52:34.841245 26384 logs.go:277] 0 containers: []
W0307 18:52:34.841258 26384 logs.go:279] No container was found matching "coredns"
I0307 18:52:34.841267 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 18:52:34.841329 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 18:52:34.878585 26384 cri.go:87] found id: "def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:34.878643 26384 cri.go:87] found id: ""
I0307 18:52:34.878663 26384 logs.go:277] 1 containers: [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a]
I0307 18:52:34.878720 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:34.882566 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 18:52:34.882625 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 18:52:34.909524 26384 cri.go:87] found id: ""
I0307 18:52:34.909550 26384 logs.go:277] 0 containers: []
W0307 18:52:34.909557 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 18:52:34.909565 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 18:52:34.909613 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 18:52:34.936954 26384 cri.go:87] found id: "fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:34.936975 26384 cri.go:87] found id: ""
I0307 18:52:34.936983 26384 logs.go:277] 1 containers: [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc]
I0307 18:52:34.937053 26384 ssh_runner.go:195] Run: which crictl
I0307 18:52:34.941502 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 18:52:34.941564 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 18:52:34.971973 26384 cri.go:87] found id: ""
I0307 18:52:34.971995 26384 logs.go:277] 0 containers: []
W0307 18:52:34.972004 26384 logs.go:279] No container was found matching "kindnet"
I0307 18:52:34.972011 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 18:52:34.972070 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 18:52:35.003175 26384 cri.go:87] found id: ""
I0307 18:52:35.003199 26384 logs.go:277] 0 containers: []
W0307 18:52:35.003206 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 18:52:35.003221 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 18:52:35.003233 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 18:52:35.057263 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 18:52:35.057287 26384 logs.go:123] Gathering logs for kube-apiserver [93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714] ...
I0307 18:52:35.057300 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93301a81e7c8a189440fa40cf91f23a2ed9dda6acef62073dc7f710643b88714"
I0307 18:52:35.093840 26384 logs.go:123] Gathering logs for etcd [df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0] ...
I0307 18:52:35.093865 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df4fdafcd01506f0b4b026741527d33cda4ceb39a1380b3367640b9eeedbf5d0"
I0307 18:52:35.131551 26384 logs.go:123] Gathering logs for kube-scheduler [def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a] ...
I0307 18:52:35.131580 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def3c69fe7c257c82579dd4a71b579d28314bf73676e8439efce5e796168916a"
I0307 18:52:35.213034 26384 logs.go:123] Gathering logs for kube-controller-manager [fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc] ...
I0307 18:52:35.213066 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbb60286f148fcd22836c22ccfffdcfb8511432a94175443f4b73e3776c8afbc"
I0307 18:52:35.250410 26384 logs.go:123] Gathering logs for containerd ...
I0307 18:52:35.250442 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 18:52:35.305928 26384 logs.go:123] Gathering logs for kubelet ...
I0307 18:52:35.305959 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 18:52:35.366041 26384 logs.go:123] Gathering logs for container status ...
I0307 18:52:35.366074 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0307 18:52:35.411044 26384 logs.go:123] Gathering logs for dmesg ...
I0307 18:52:35.411068 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 18:52:37.924460 26384 api_server.go:252] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
I0307 18:52:37.925115 26384 api_server.go:268] stopped: https://192.168.39.212:8443/healthz: Get "https://192.168.39.212:8443/healthz": dial tcp 192.168.39.212:8443: connect: connection refused
I0307 18:52:38.240997 26384 kubeadm.go:637] restartCluster took 4m28.730822487s
W0307 18:52:38.241143 26384 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
I0307 18:52:38.241176 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0307 18:52:39.540779 26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.299584283s)
I0307 18:52:39.540844 26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 18:52:39.554353 26384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0307 18:52:39.563539 26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0307 18:52:39.572536 26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0307 18:52:39.572574 26384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0307 18:52:39.609552 26384 kubeadm.go:322] W0307 18:52:39.601196 5604 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0307 18:52:39.746961 26384 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0307 18:56:41.125984 26384 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0307 18:56:41.126127 26384 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0307 18:56:41.127655 26384 kubeadm.go:322] [init] Using Kubernetes version: v1.24.4
I0307 18:56:41.127696 26384 kubeadm.go:322] [preflight] Running pre-flight checks
I0307 18:56:41.127765 26384 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0307 18:56:41.127875 26384 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0307 18:56:41.127983 26384 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0307 18:56:41.128061 26384 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0307 18:56:41.130326 26384 out.go:204] - Generating certificates and keys ...
I0307 18:56:41.130393 26384 kubeadm.go:322] [certs] Using existing ca certificate authority
I0307 18:56:41.130451 26384 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0307 18:56:41.130531 26384 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0307 18:56:41.130620 26384 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0307 18:56:41.130718 26384 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0307 18:56:41.130787 26384 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0307 18:56:41.130866 26384 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0307 18:56:41.130953 26384 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0307 18:56:41.131049 26384 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0307 18:56:41.131155 26384 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0307 18:56:41.131217 26384 kubeadm.go:322] [certs] Using the existing "sa" key
I0307 18:56:41.131292 26384 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0307 18:56:41.131363 26384 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0307 18:56:41.131434 26384 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0307 18:56:41.131523 26384 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0307 18:56:41.131603 26384 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0307 18:56:41.131688 26384 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0307 18:56:41.131762 26384 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0307 18:56:41.131795 26384 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0307 18:56:41.131852 26384 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0307 18:56:41.133514 26384 out.go:204] - Booting up control plane ...
I0307 18:56:41.133618 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0307 18:56:41.133699 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0307 18:56:41.133776 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0307 18:56:41.133863 26384 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0307 18:56:41.134051 26384 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0307 18:56:41.134110 26384 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0307 18:56:41.134119 26384 kubeadm.go:322]
I0307 18:56:41.134162 26384 kubeadm.go:322] Unfortunately, an error has occurred:
I0307 18:56:41.134218 26384 kubeadm.go:322] timed out waiting for the condition
I0307 18:56:41.134224 26384 kubeadm.go:322]
I0307 18:56:41.134270 26384 kubeadm.go:322] This error is likely caused by:
I0307 18:56:41.134347 26384 kubeadm.go:322] - The kubelet is not running
I0307 18:56:41.134504 26384 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0307 18:56:41.134517 26384 kubeadm.go:322]
I0307 18:56:41.134650 26384 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0307 18:56:41.134698 26384 kubeadm.go:322] - 'systemctl status kubelet'
I0307 18:56:41.134741 26384 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0307 18:56:41.134760 26384 kubeadm.go:322]
I0307 18:56:41.134863 26384 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0307 18:56:41.134935 26384 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0307 18:56:41.135037 26384 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl:
I0307 18:56:41.135174 26384 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
I0307 18:56:41.135274 26384 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0307 18:56:41.135447 26384 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
W0307 18:56:41.135604 26384 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0307 18:52:39.601196 5604 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0307 18:56:41.135655 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0307 18:56:42.416834 26384 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.281155319s)
I0307 18:56:42.416897 26384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0307 18:56:42.431050 26384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0307 18:56:42.440667 26384 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0307 18:56:42.440700 26384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0307 18:56:42.477411 26384 kubeadm.go:322] W0307 18:56:42.461556 7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0307 18:56:42.627046 26384 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0307 19:00:43.649484 26384 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0307 19:00:43.649599 26384 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0307 19:00:43.651218 26384 kubeadm.go:322] [init] Using Kubernetes version: v1.24.4
I0307 19:00:43.651271 26384 kubeadm.go:322] [preflight] Running pre-flight checks
I0307 19:00:43.651420 26384 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0307 19:00:43.651548 26384 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0307 19:00:43.651725 26384 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0307 19:00:43.651796 26384 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0307 19:00:43.654219 26384 out.go:204] - Generating certificates and keys ...
I0307 19:00:43.654288 26384 kubeadm.go:322] [certs] Using existing ca certificate authority
I0307 19:00:43.654338 26384 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0307 19:00:43.654403 26384 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0307 19:00:43.654458 26384 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0307 19:00:43.654514 26384 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0307 19:00:43.654563 26384 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0307 19:00:43.654618 26384 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0307 19:00:43.654668 26384 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0307 19:00:43.654730 26384 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0307 19:00:43.654798 26384 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0307 19:00:43.654859 26384 kubeadm.go:322] [certs] Using the existing "sa" key
I0307 19:00:43.654935 26384 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0307 19:00:43.654978 26384 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0307 19:00:43.655070 26384 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0307 19:00:43.655168 26384 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0307 19:00:43.655220 26384 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0307 19:00:43.655347 26384 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0307 19:00:43.655430 26384 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0307 19:00:43.655465 26384 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0307 19:00:43.655523 26384 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0307 19:00:43.657162 26384 out.go:204] - Booting up control plane ...
I0307 19:00:43.657245 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0307 19:00:43.657351 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0307 19:00:43.657442 26384 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0307 19:00:43.657533 26384 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0307 19:00:43.657658 26384 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0307 19:00:43.657699 26384 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0307 19:00:43.657705 26384 kubeadm.go:322]
I0307 19:00:43.657736 26384 kubeadm.go:322] Unfortunately, an error has occurred:
I0307 19:00:43.657782 26384 kubeadm.go:322] timed out waiting for the condition
I0307 19:00:43.657789 26384 kubeadm.go:322]
I0307 19:00:43.657829 26384 kubeadm.go:322] This error is likely caused by:
I0307 19:00:43.657862 26384 kubeadm.go:322] - The kubelet is not running
I0307 19:00:43.657966 26384 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0307 19:00:43.657977 26384 kubeadm.go:322]
I0307 19:00:43.658062 26384 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0307 19:00:43.658091 26384 kubeadm.go:322] - 'systemctl status kubelet'
I0307 19:00:43.658134 26384 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0307 19:00:43.658142 26384 kubeadm.go:322]
I0307 19:00:43.658255 26384 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0307 19:00:43.658393 26384 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0307 19:00:43.658480 26384 kubeadm.go:322] Here is one example how you may list all running Kubernetes containers by using crictl:
I0307 19:00:43.658603 26384 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
I0307 19:00:43.658702 26384 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0307 19:00:43.658828 26384 kubeadm.go:322] - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
I0307 19:00:43.658871 26384 kubeadm.go:403] StartCluster complete in 12m34.187466467s
I0307 19:00:43.658927 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0307 19:00:43.658974 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0307 19:00:43.701064 26384 cri.go:87] found id: "4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc"
I0307 19:00:43.701086 26384 cri.go:87] found id: ""
I0307 19:00:43.701098 26384 logs.go:277] 1 containers: [4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc]
I0307 19:00:43.701142 26384 ssh_runner.go:195] Run: which crictl
I0307 19:00:43.705362 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0307 19:00:43.705417 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0307 19:00:43.734452 26384 cri.go:87] found id: "c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56"
I0307 19:00:43.734469 26384 cri.go:87] found id: ""
I0307 19:00:43.734476 26384 logs.go:277] 1 containers: [c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56]
I0307 19:00:43.734531 26384 ssh_runner.go:195] Run: which crictl
I0307 19:00:43.739954 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0307 19:00:43.740015 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0307 19:00:43.766381 26384 cri.go:87] found id: ""
I0307 19:00:43.766402 26384 logs.go:277] 0 containers: []
W0307 19:00:43.766408 26384 logs.go:279] No container was found matching "coredns"
I0307 19:00:43.766413 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0307 19:00:43.766453 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0307 19:00:43.796840 26384 cri.go:87] found id: "1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41"
I0307 19:00:43.796867 26384 cri.go:87] found id: ""
I0307 19:00:43.796875 26384 logs.go:277] 1 containers: [1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41]
I0307 19:00:43.796929 26384 ssh_runner.go:195] Run: which crictl
I0307 19:00:43.801100 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0307 19:00:43.801154 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0307 19:00:43.830552 26384 cri.go:87] found id: ""
I0307 19:00:43.830577 26384 logs.go:277] 0 containers: []
W0307 19:00:43.830584 26384 logs.go:279] No container was found matching "kube-proxy"
I0307 19:00:43.830589 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0307 19:00:43.830637 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0307 19:00:43.867303 26384 cri.go:87] found id: "8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06"
I0307 19:00:43.867324 26384 cri.go:87] found id: ""
I0307 19:00:43.867331 26384 logs.go:277] 1 containers: [8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06]
I0307 19:00:43.867370 26384 ssh_runner.go:195] Run: which crictl
I0307 19:00:43.871114 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0307 19:00:43.871164 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0307 19:00:43.904677 26384 cri.go:87] found id: ""
I0307 19:00:43.904703 26384 logs.go:277] 0 containers: []
W0307 19:00:43.904709 26384 logs.go:279] No container was found matching "kindnet"
I0307 19:00:43.904715 26384 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0307 19:00:43.904758 26384 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0307 19:00:43.944324 26384 cri.go:87] found id: ""
I0307 19:00:43.944349 26384 logs.go:277] 0 containers: []
W0307 19:00:43.944359 26384 logs.go:279] No container was found matching "storage-provisioner"
I0307 19:00:43.944378 26384 logs.go:123] Gathering logs for containerd ...
I0307 19:00:43.944395 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0307 19:00:44.011972 26384 logs.go:123] Gathering logs for kubelet ...
I0307 19:00:44.012003 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0307 19:00:44.077224 26384 logs.go:123] Gathering logs for dmesg ...
I0307 19:00:44.077258 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0307 19:00:44.091281 26384 logs.go:123] Gathering logs for describe nodes ...
I0307 19:00:44.091305 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0307 19:00:44.158036 26384 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0307 19:00:44.158054 26384 logs.go:123] Gathering logs for etcd [c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56] ...
I0307 19:00:44.158065 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56"
I0307 19:00:44.193518 26384 logs.go:123] Gathering logs for kube-scheduler [1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41] ...
I0307 19:00:44.193546 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41"
I0307 19:00:44.281107 26384 logs.go:123] Gathering logs for kube-apiserver [4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc] ...
I0307 19:00:44.281138 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc"
I0307 19:00:44.321328 26384 logs.go:123] Gathering logs for kube-controller-manager [8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06] ...
I0307 19:00:44.321353 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06"
I0307 19:00:44.370028 26384 logs.go:123] Gathering logs for container status ...
I0307 19:00:44.370058 26384 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0307 19:00:44.410088 26384 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0307 18:56:42.461556 7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0307 19:00:44.410135 26384 out.go:239] *
W0307 19:00:44.410302 26384 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0307 18:56:42.461556 7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0307 19:00:44.410323 26384 out.go:239] *
W0307 19:00:44.411225 26384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0307 19:00:44.414682 26384 out.go:177]
W0307 19:00:44.416349 26384 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0307 18:56:42.461556 7078 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0307 19:00:44.416447 26384 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0307 19:00:44.416516 26384 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0307 19:00:44.419274 26384 out.go:177]
*
* ==> container status <==
* CONTAINER      IMAGE          CREATED              STATE    NAME                      ATTEMPT  POD ID
8f74b327d355b    1f99cb6da9a82  About a minute ago   Exited   kube-controller-manager   15       43e8a5d7973c1
c6ea84a251b2a    aebe758cef4cd  About a minute ago   Exited   etcd                      17       6336c6d20265b
4c3f077f022bd    6cab9d1bed1be  About a minute ago   Exited   kube-apiserver            14       a48cee835eb73
1d5f6f3ec60ee    03fa22539fc1c  4 minutes ago        Running  kube-scheduler            3        a639f60172172
*
* ==> containerd <==
* -- Journal begins at Tue 2023-03-07 18:47:44 UTC, ends at Tue 2023-03-07 19:00:45 UTC. --
Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.352629573Z" level=warning msg="cleaning up after shim disconnected" id=c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56 namespace=k8s.io
Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.352677227Z" level=info msg="cleaning up dead shim"
Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.367324561Z" level=warning msg="cleanup warnings time=\"2023-03-07T18:59:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8153 runtime=io.containerd.runc.v2\ntime=\"2023-03-07T18:59:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.367613601Z" level=error msg="copy shim log" error="read /proc/self/fd/46: file already closed"
Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.368034875Z" level=error msg="Failed to pipe stdout of container \"c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56\"" error="reading from a closed fifo"
Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.369103455Z" level=error msg="Failed to pipe stderr of container \"c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56\"" error="reading from a closed fifo"
Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.374664547Z" level=error msg="StartContainer for \"c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: \"etcd\": executable file not found in $PATH: unknown"
Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.502502972Z" level=info msg="RemoveContainer for \"f3ca8f12165168ac992c4913fc9ad7f88f5bbbd04ae7a7460359a1cdec15f0d2\""
Mar 07 18:59:43 test-preload-203208 containerd[632]: time="2023-03-07T18:59:43.511763714Z" level=info msg="RemoveContainer for \"f3ca8f12165168ac992c4913fc9ad7f88f5bbbd04ae7a7460359a1cdec15f0d2\" returns successfully"
Mar 07 18:59:44 test-preload-203208 containerd[632]: time="2023-03-07T18:59:44.969321507Z" level=info msg="CreateContainer within sandbox \"43e8a5d7973c13866b592527eea80575bff1fcfbd65b345924df45a4e2137ade\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:15,}"
Mar 07 18:59:44 test-preload-203208 containerd[632]: time="2023-03-07T18:59:44.992714494Z" level=info msg="CreateContainer within sandbox \"43e8a5d7973c13866b592527eea80575bff1fcfbd65b345924df45a4e2137ade\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:15,} returns container id \"8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06\""
Mar 07 18:59:44 test-preload-203208 containerd[632]: time="2023-03-07T18:59:44.993581903Z" level=info msg="StartContainer for \"8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06\""
Mar 07 18:59:45 test-preload-203208 containerd[632]: time="2023-03-07T18:59:45.330149372Z" level=info msg="StartContainer for \"8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06\" returns successfully"
Mar 07 18:59:51 test-preload-203208 containerd[632]: time="2023-03-07T18:59:51.925430491Z" level=info msg="shim disconnected" id=4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc
Mar 07 18:59:51 test-preload-203208 containerd[632]: time="2023-03-07T18:59:51.925556054Z" level=warning msg="cleaning up after shim disconnected" id=4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc namespace=k8s.io
Mar 07 18:59:51 test-preload-203208 containerd[632]: time="2023-03-07T18:59:51.925568402Z" level=info msg="cleaning up dead shim"
Mar 07 18:59:51 test-preload-203208 containerd[632]: time="2023-03-07T18:59:51.938637174Z" level=warning msg="cleanup warnings time=\"2023-03-07T18:59:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8216 runtime=io.containerd.runc.v2\n"
Mar 07 18:59:52 test-preload-203208 containerd[632]: time="2023-03-07T18:59:52.527968449Z" level=info msg="RemoveContainer for \"16b2d8e8669683f1b0ae8136038cd8f61eb5d0c9ba63472d90cc6dbc04d1edef\""
Mar 07 18:59:52 test-preload-203208 containerd[632]: time="2023-03-07T18:59:52.534435720Z" level=info msg="RemoveContainer for \"16b2d8e8669683f1b0ae8136038cd8f61eb5d0c9ba63472d90cc6dbc04d1edef\" returns successfully"
Mar 07 19:00:02 test-preload-203208 containerd[632]: time="2023-03-07T19:00:02.935817898Z" level=info msg="shim disconnected" id=8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06
Mar 07 19:00:02 test-preload-203208 containerd[632]: time="2023-03-07T19:00:02.935938523Z" level=warning msg="cleaning up after shim disconnected" id=8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06 namespace=k8s.io
Mar 07 19:00:02 test-preload-203208 containerd[632]: time="2023-03-07T19:00:02.935952631Z" level=info msg="cleaning up dead shim"
Mar 07 19:00:02 test-preload-203208 containerd[632]: time="2023-03-07T19:00:02.951309595Z" level=warning msg="cleanup warnings time=\"2023-03-07T19:00:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8245 runtime=io.containerd.runc.v2\n"
Mar 07 19:00:03 test-preload-203208 containerd[632]: time="2023-03-07T19:00:03.555081443Z" level=info msg="RemoveContainer for \"402b33a0acb4db523599b4b0c7a961bf445a627e88ad8730be8d0e408479454f\""
Mar 07 19:00:03 test-preload-203208 containerd[632]: time="2023-03-07T19:00:03.560489063Z" level=info msg="RemoveContainer for \"402b33a0acb4db523599b4b0c7a961bf445a627e88ad8730be8d0e408479454f\" returns successfully"
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [Mar 7 18:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.069940] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +3.931123] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.246595] systemd-fstab-generator[114]: Ignoring "noauto" for root device
[ +0.147269] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +2.398341] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[Mar 7 18:48] systemd-fstab-generator[529]: Ignoring "noauto" for root device
[ +2.816811] systemd-fstab-generator[561]: Ignoring "noauto" for root device
[ +0.104429] systemd-fstab-generator[572]: Ignoring "noauto" for root device
[ +0.137298] systemd-fstab-generator[585]: Ignoring "noauto" for root device
[ +0.103680] systemd-fstab-generator[596]: Ignoring "noauto" for root device
[ +0.237292] systemd-fstab-generator[623]: Ignoring "noauto" for root device
[ +13.571443] systemd-fstab-generator[818]: Ignoring "noauto" for root device
[Mar 7 18:52] systemd-fstab-generator[5678]: Ignoring "noauto" for root device
[Mar 7 18:56] systemd-fstab-generator[7151]: Ignoring "noauto" for root device
*
* ==> etcd [c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56] <==
*
*
* ==> kernel <==
* 19:00:45 up 13 min, 0 users, load average: 0.16, 0.27, 0.16
Linux test-preload-203208 5.10.57 #1 SMP Fri Feb 24 23:00:41 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [4c3f077f022bdea89cb8bf2589173b3be31c0e185e35fd928616ce4549fb87dc] <==
* I0307 18:59:31.397729 1 server.go:558] external host was not specified, using 192.168.39.212
I0307 18:59:31.398685 1 server.go:158] Version: v1.24.4
I0307 18:59:31.398771 1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0307 18:59:31.880843 1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
I0307 18:59:31.882238 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0307 18:59:31.882250 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0307 18:59:31.883546 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0307 18:59:31.883559 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0307 18:59:31.886102 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0307 18:59:32.881739 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0307 18:59:32.886774 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0307 18:59:33.882167 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0307 18:59:34.739525 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0307 18:59:35.553368 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0307 18:59:37.264279 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0307 18:59:38.341743 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0307 18:59:40.891684 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0307 18:59:41.954797 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0307 18:59:48.013588 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0307 18:59:48.078321 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
E0307 18:59:51.886026 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-controller-manager [8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06] <==
* vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:190 +0x2f6
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run.func1()
vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:165 +0x3c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x0?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x3e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x68e500?, {0x4d010e0, 0xc001023260}, 0x1, 0xc0000dc7e0)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00012e008?, 0xdf8475800, 0x0, 0x80?, 0xc0003aede0?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x0?, 0xc000101860?, 0x0?)
vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:164 +0x372
goroutine 148 [syscall]:
syscall.Syscall6(0xe8, 0xe, 0xc00108fc14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
/usr/local/go/src/syscall/asm_linux_amd64.s:43 +0x5
k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0x0?, {0xc00108fc14?, 0x0?, 0x0?}, 0x0?)
vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:56 +0x58
k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0002000e0)
vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x7d
k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000357220)
vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x26e
created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1c5
*
* ==> kube-scheduler [1d5f6f3ec60ee126296dc37837b2c164122f271fbf16e8adf26153a72448ce41] <==
* E0307 18:59:53.386448 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.212:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 18:59:57.386066 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.39.212:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 18:59:57.386144 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.212:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 19:00:00.576353 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.39.212:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 19:00:00.576441 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.212:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 19:00:21.817814 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.39.212:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 19:00:21.818034 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.212:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 19:00:22.223130 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 19:00:22.223206 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 19:00:24.282317 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 19:00:24.282410 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 19:00:27.386083 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.39.212:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 19:00:27.386155 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.212:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 19:00:27.715416 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 19:00:27.715477 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 19:00:29.333542 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://192.168.39.212:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 19:00:29.333615 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.212:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 19:00:37.541286 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.39.212:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 19:00:37.541371 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.212:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 19:00:38.660515 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.212:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 19:00:38.660564 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.212:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 19:00:39.252011 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 19:00:39.252093 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.212:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
W0307 19:00:39.383693 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.212:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
E0307 19:00:39.383775 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.212:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
*
* ==> kubelet <==
* -- Journal begins at Tue 2023-03-07 18:47:44 UTC, ends at Tue 2023-03-07 19:00:45 UTC. --
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.029488 7157 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.113039 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.213810 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.314784 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.415438 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.515835 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: W0307 19:00:44.562175 7157 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)test-preload-203208&limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.562355 7157 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)test-preload-203208&limit=500&resourceVersion=0": dial tcp 192.168.39.212:8443: connect: connection refused
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.617031 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.717396 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.818448 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.918996 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: I0307 19:00:44.966047 7157 scope.go:110] "RemoveContainer" containerID="c6ea84a251b2a68faf0c7bc662a34e8da962550ddfb0892eac5c9cabe219fd56"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: I0307 19:00:44.966085 7157 scope.go:110] "RemoveContainer" containerID="8f74b327d355ba8b122085b2bd262e7f6a18dde235bc9efbb62fef4f6f4a4c06"
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.966373 7157 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-test-preload-203208_kube-system(15302bf5fc252d83d35e6df26d8799f5)\"" pod="kube-system/kube-controller-manager-test-preload-203208" podUID=15302bf5fc252d83d35e6df26d8799f5
Mar 07 19:00:44 test-preload-203208 kubelet[7157]: E0307 19:00:44.966370 7157 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-test-preload-203208_kube-system(6bf068956ab0be326534b38dbab322fb)\"" pod="kube-system/etcd-test-preload-203208" podUID=6bf068956ab0be326534b38dbab322fb
Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.019222 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.120032 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.221223 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.321971 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.422576 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.523238 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.623360 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.724364 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
Mar 07 19:00:45 test-preload-203208 kubelet[7157]: E0307 19:00:45.825139 7157 kubelet.go:2424] "Error getting node" err="node \"test-preload-203208\" not found"
-- /stdout --
** stderr **
E0307 19:00:45.669001 26792 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
! unable to fetch logs for: describe nodes
** /stderr **
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-203208 -n test-preload-203208
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-203208 -n test-preload-203208: exit status 2 (221.514824ms)
-- stdout --
Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "test-preload-203208" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-203208" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-linux-amd64 delete -p test-preload-203208
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-203208: (1.196438525s)
--- FAIL: TestPreload (1036.22s)