=== RUN TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-darwin-amd64 start -p default-k8s-diff-port-603000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit --kubernetes-version=v1.28.4
E0213 15:53:39.083776 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:39.089507 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:39.099669 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:39.120976 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:39.162304 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:39.242806 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:39.404047 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:39.726039 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:40.367566 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:41.649174 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:44.366309 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:49.486909 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:53:50.861370 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/false-599000/client.crt: no such file or directory
E0213 15:53:53.826648 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:53:53.832007 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:53:53.842688 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:53:53.862790 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:53:53.904217 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:53:53.984514 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:53:54.144963 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:53:54.466062 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:53:55.107295 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:53:56.387490 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:53:58.947807 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:53:59.729067 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:54:01.425172 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/enable-default-cni-599000/client.crt: no such file or directory
E0213 15:54:04.004350 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/addons-679000/client.crt: no such file or directory
E0213 15:54:04.068291 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:54:05.720302 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/ingress-addon-legacy-620000/client.crt: no such file or directory
E0213 15:54:14.309238 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:54:20.211065 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:54:34.789960 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:54:35.951504 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/skaffold-220000/client.crt: no such file or directory
E0213 15:54:43.079495 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/functional-634000/client.crt: no such file or directory
E0213 15:55:00.028678 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/functional-634000/client.crt: no such file or directory
E0213 15:55:01.173464 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:55:15.751304 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:55:25.204684 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/flannel-599000/client.crt: no such file or directory
E0213 15:55:26.503121 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/auto-599000/client.crt: no such file or directory
E0213 15:55:46.802980 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/kindnet-599000/client.crt: no such file or directory
E0213 15:56:07.842986 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/bridge-599000/client.crt: no such file or directory
E0213 15:56:23.096529 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/no-preload-355000/client.crt: no such file or directory
E0213 15:56:37.673833 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/old-k8s-version-481000/client.crt: no such file or directory
E0213 15:56:49.556064 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/auto-599000/client.crt: no such file or directory
E0213 15:57:04.527508 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/kubenet-599000/client.crt: no such file or directory
E0213 15:57:09.856383 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/kindnet-599000/client.crt: no such file or directory
E0213 15:57:16.917805 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/calico-599000/client.crt: no such file or directory
E0213 15:57:25.138502 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/custom-flannel-599000/client.crt: no such file or directory
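The repeated cert_rotation.go:168 lines above are background noise rather than the failure itself: client-go's certificate-rotation watcher keeps trying to reload client certificates for profiles (no-preload-355000, old-k8s-version-481000, and others) that earlier tests in this run already deleted, so every reload attempt fails with "no such file or directory". A minimal sketch, assuming a hypothetical reloadClientCert helper, of how such a reloader surfaces exactly this error:

package main

import (
	"fmt"
	"os"
	"time"
)

// reloadClientCert mimics the periodic re-read: once the profile directory
// has been torn down, os.ReadFile fails with ENOENT.
func reloadClientCert(path string) error {
	_, err := os.ReadFile(path)
	return err
}

func main() {
	// Hypothetical path in the same shape as the log lines above.
	path := "/Users/jenkins/.minikube/profiles/no-preload-355000/client.crt"
	for i := 0; i < 3; i++ {
		if err := reloadClientCert(path); err != nil {
			fmt.Printf("key failed with : %v\n", err)
		}
		time.Sleep(100 * time.Millisecond) // the real watcher retries with backoff and jitter
	}
}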
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-diff-port-603000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit --kubernetes-version=v1.28.4: exit status 80 (6m43.504120504s)
-- stdout --
* [default-k8s-diff-port-603000] minikube v1.32.0 on Darwin 14.3.1
- MINIKUBE_LOCATION=18169
- KUBECONFIG=/Users/jenkins/minikube-integration/18169-2790/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-2790/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on existing profile
* Starting control plane node default-k8s-diff-port-603000 in cluster default-k8s-diff-port-603000
* Restarting existing hyperkit VM for "default-k8s-diff-port-603000" ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p default-k8s-diff-port-603000 addons enable metrics-server
* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
-- /stdout --
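Note that stdout alone looks healthy (VM restarted, addons enabled); the failure is only visible in the harness's "Non-zero exit ... exit status 80" line above. A sketch, not the actual start_stop_delete_test helper, of how a Go harness captures that exit code and duration:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "default-k8s-diff-port-603000", "--memory=2200", "--alsologtostderr",
		"--wait=true", "--apiserver-port=8444", "--driver=hyperkit",
		"--kubernetes-version=v1.28.4")
	out, err := cmd.CombinedOutput() // stdout and stderr interleaved, as in this log
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("Non-zero exit: exit status %d (%s)\n%s", exitErr.ExitCode(), time.Since(start), out)
	}
}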
** stderr **
I0213 15:53:39.031988 10919 out.go:291] Setting OutFile to fd 1 ...
I0213 15:53:39.032264 10919 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:53:39.032269 10919 out.go:304] Setting ErrFile to fd 2...
I0213 15:53:39.032273 10919 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 15:53:39.032469 10919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18169-2790/.minikube/bin
I0213 15:53:39.033910 10919 out.go:298] Setting JSON to false
I0213 15:53:39.057539 10919 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4593,"bootTime":1707863826,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0213 15:53:39.057666 10919 start.go:136] gopshost.Virtualization returned error: not implemented yet
I0213 15:53:39.079137 10919 out.go:177] * [default-k8s-diff-port-603000] minikube v1.32.0 on Darwin 14.3.1
I0213 15:53:39.190219 10919 out.go:177] - MINIKUBE_LOCATION=18169
I0213 15:53:39.153162 10919 notify.go:220] Checking for updates...
I0213 15:53:39.264965 10919 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/18169-2790/kubeconfig
I0213 15:53:39.323341 10919 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0213 15:53:39.420312 10919 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0213 15:53:39.442029 10919 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18169-2790/.minikube
I0213 15:53:39.484115 10919 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0213 15:53:39.505671 10919 config.go:182] Loaded profile config "default-k8s-diff-port-603000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:53:39.506320 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:53:39.506407 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:53:39.515545 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57416
I0213 15:53:39.515892 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:53:39.516349 10919 main.go:141] libmachine: Using API Version 1
I0213 15:53:39.516379 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:53:39.516594 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:53:39.516701 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:53:39.516901 10919 driver.go:392] Setting default libvirt URI to qemu:///system
I0213 15:53:39.517143 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:53:39.517169 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:53:39.525054 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57418
I0213 15:53:39.525383 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:53:39.525755 10919 main.go:141] libmachine: Using API Version 1
I0213 15:53:39.525771 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:53:39.525999 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:53:39.526104 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:53:39.555221 10919 out.go:177] * Using the hyperkit driver based on existing profile
I0213 15:53:39.596896 10919 start.go:298] selected driver: hyperkit
I0213 15:53:39.596919 10919 start.go:902] validating driver "hyperkit" against &{Name:default-k8s-diff-port-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.169.0.44 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0213 15:53:39.597143 10919 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0213 15:53:39.601254 10919 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0213 15:53:39.601354 10919 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18169-2790/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0213 15:53:39.609098 10919 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
I0213 15:53:39.614409 10919 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:53:39.614432 10919 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0213 15:53:39.614600 10919 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0213 15:53:39.614663 10919 cni.go:84] Creating CNI manager for ""
I0213 15:53:39.614676 10919 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0213 15:53:39.614687 10919 start_flags.go:321] config:
{Name:default-k8s-diff-port-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.169.0.44 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0213 15:53:39.614825 10919 iso.go:125] acquiring lock: {Name:mk11c32e346f5bc1f067dee24ee83d9969db3d82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0213 15:53:39.657344 10919 out.go:177] * Starting control plane node default-k8s-diff-port-603000 in cluster default-k8s-diff-port-603000
I0213 15:53:39.678927 10919 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0213 15:53:39.678993 10919 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18169-2790/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
I0213 15:53:39.679024 10919 cache.go:56] Caching tarball of preloaded images
I0213 15:53:39.679212 10919 preload.go:174] Found /Users/jenkins/minikube-integration/18169-2790/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0213 15:53:39.679234 10919 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0213 15:53:39.679405 10919 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/default-k8s-diff-port-603000/config.json ...
I0213 15:53:39.680419 10919 start.go:365] acquiring machines lock for default-k8s-diff-port-603000: {Name:mke947868f35224fa4aab1d5f0a66de1e12a8270 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0213 15:53:39.680530 10919 start.go:369] acquired machines lock for "default-k8s-diff-port-603000" in 86.799µs
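The machines lock above is acquired under a retry specification ({... Delay:500ms Timeout:13m0s ...}); with no contention it succeeds on the first attempt, hence the 86.799µs. A rough sketch of that acquire-with-delay-and-timeout pattern (names are illustrative, not minikube's):

package main

import (
	"errors"
	"fmt"
	"time"
)

// acquire polls try() every delay until it succeeds or timeout elapses.
func acquire(try func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !try() {
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	start := time.Now()
	err := acquire(func() bool { return true }, 500*time.Millisecond, 13*time.Minute)
	fmt.Printf("acquired machines lock in %s (err=%v)\n", time.Since(start), err)
}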
I0213 15:53:39.680565 10919 start.go:96] Skipping create...Using existing machine configuration
I0213 15:53:39.680578 10919 fix.go:54] fixHost starting:
I0213 15:53:39.680937 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:53:39.680968 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:53:39.689644 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57420
I0213 15:53:39.690020 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:53:39.690404 10919 main.go:141] libmachine: Using API Version 1
I0213 15:53:39.690422 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:53:39.690618 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:53:39.690705 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:53:39.690791 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetState
I0213 15:53:39.690871 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0213 15:53:39.690932 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | hyperkit pid from json: 10848
I0213 15:53:39.691924 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | hyperkit pid 10848 missing from process table
I0213 15:53:39.691960 10919 fix.go:102] recreateIfNeeded on default-k8s-diff-port-603000: state=Stopped err=<nil>
I0213 15:53:39.691986 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
W0213 15:53:39.692070 10919 fix.go:128] unexpected machine state, will restart: <nil>
I0213 15:53:39.734051 10919 out.go:177] * Restarting existing hyperkit VM for "default-k8s-diff-port-603000" ...
I0213 15:53:39.755198 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .Start
I0213 15:53:39.755501 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0213 15:53:39.755574 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/hyperkit.pid
I0213 15:53:39.757088 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | hyperkit pid 10848 missing from process table
I0213 15:53:39.757109 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | pid 10848 is in state "Stopped"
I0213 15:53:39.757128 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/hyperkit.pid...
I0213 15:53:39.757441 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Using UUID 32f93a59-b32d-4016-9ad2-e6755b97ad7f
I0213 15:53:39.781929 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Generated MAC ba:b2:e8:3b:2f:ed
I0213 15:53:39.781963 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-603000
I0213 15:53:39.782101 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"32f93a59-b32d-4016-9ad2-e6755b97ad7f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004f7260)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0213 15:53:39.782166 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"32f93a59-b32d-4016-9ad2-e6755b97ad7f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0004f7260)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0213 15:53:39.782205 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "32f93a59-b32d-4016-9ad2-e6755b97ad7f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/default-k8s-diff-port-603000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/tty,log=/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/bzimage,/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-603000"}
I0213 15:53:39.782237 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 32f93a59-b32d-4016-9ad2-e6755b97ad7f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/default-k8s-diff-port-603000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/tty,log=/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/console-ring -f kexec,/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/bzimage,/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=default-k8s-diff-port-603000"
I0213 15:53:39.782252 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0213 15:53:39.783692 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 DEBUG: hyperkit: Pid is 10930
I0213 15:53:39.784171 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Attempt 0
I0213 15:53:39.784194 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0213 15:53:39.784284 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | hyperkit pid from json: 10930
I0213 15:53:39.785883 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Searching for ba:b2:e8:3b:2f:ed in /var/db/dhcpd_leases ...
I0213 15:53:39.785972 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Found 43 entries in /var/db/dhcpd_leases!
I0213 15:53:39.785996 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:22:34:c6:19:4c:2e ID:1,22:34:c6:19:4c:2e Lease:0x65cd524f}
I0213 15:53:39.786010 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:ba:b2:e8:3b:2f:ed ID:1,ba:b2:e8:3b:2f:ed Lease:0x65cd5243}
I0213 15:53:39.786024 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Found match: ba:b2:e8:3b:2f:ed
I0213 15:53:39.786040 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | IP: 192.169.0.44
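Hyperkit does not report the guest's IP itself, so the driver recovers it by matching the VM's generated MAC (ba:b2:e8:3b:2f:ed) against macOS's DHCP lease table, as logged above. A sketch of that lookup over the parsed entries shown in the log (the raw /var/db/dhcpd_leases format differs slightly):

package main

import (
	"fmt"
	"regexp"
)

// findIP scans lease entries of the shape logged above and returns the
// IPAddress whose HWAddress matches the given MAC.
func findIP(leases, mac string) (string, bool) {
	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+)`)
	for _, m := range re.FindAllStringSubmatch(leases, -1) {
		if m[2] == mac {
			return m[1], true
		}
	}
	return "", false
}

func main() {
	leases := "{Name:minikube IPAddress:192.169.0.43 HWAddress:22:34:c6:19:4c:2e}\n" +
		"{Name:minikube IPAddress:192.169.0.44 HWAddress:ba:b2:e8:3b:2f:ed}"
	fmt.Println(findIP(leases, "ba:b2:e8:3b:2f:ed")) // 192.169.0.44 true
}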
I0213 15:53:39.786086 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetConfigRaw
I0213 15:53:39.786713 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetIP
I0213 15:53:39.786879 10919 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/default-k8s-diff-port-603000/config.json ...
I0213 15:53:39.787229 10919 machine.go:88] provisioning docker machine ...
I0213 15:53:39.787240 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:53:39.787370 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetMachineName
I0213 15:53:39.787487 10919 buildroot.go:166] provisioning hostname "default-k8s-diff-port-603000"
I0213 15:53:39.787501 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetMachineName
I0213 15:53:39.787627 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:53:39.787759 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:53:39.787877 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:39.787981 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:39.788077 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:53:39.788507 10919 main.go:141] libmachine: Using SSH client type: native
I0213 15:53:39.788816 10919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 192.169.0.44 22 <nil> <nil>}
I0213 15:53:39.788826 10919 main.go:141] libmachine: About to run SSH command:
sudo hostname default-k8s-diff-port-603000 && echo "default-k8s-diff-port-603000" | sudo tee /etc/hostname
I0213 15:53:39.791879 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0213 15:53:39.800677 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0213 15:53:39.801580 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0213 15:53:39.801604 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0213 15:53:39.801671 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0213 15:53:39.801711 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0213 15:53:40.172267 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0213 15:53:40.172282 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0213 15:53:40.276330 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0213 15:53:40.276350 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0213 15:53:40.276362 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0213 15:53:40.276380 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0213 15:53:40.277255 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0213 15:53:40.277269 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0213 15:53:45.383748 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0213 15:53:45.383806 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0213 15:53:45.383817 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | 2024/02/13 15:53:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0213 15:53:53.129932 10919 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-603000
I0213 15:53:53.129954 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:53:53.130090 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:53:53.130188 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:53.130301 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:53.130395 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:53:53.130523 10919 main.go:141] libmachine: Using SSH client type: native
I0213 15:53:53.130792 10919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 192.169.0.44 22 <nil> <nil>}
I0213 15:53:53.130805 10919 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sdefault-k8s-diff-port-603000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-603000/g' /etc/hosts;
else
echo '127.0.1.1 default-k8s-diff-port-603000' | sudo tee -a /etc/hosts;
fi
fi
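The SSH script above is the provisioner's idempotent hostname fix-up: rewrite an existing 127.0.1.1 entry if one is present, otherwise append one. A hedged sketch of how such a command is rendered from a template (minikube's actual template lives in its provisioner; the helper name here is made up):

package main

import "fmt"

// setHostnameCmd renders the idempotent /etc/hosts edit for a hostname.
func setHostnameCmd(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() {
	fmt.Println(setHostnameCmd("default-k8s-diff-port-603000"))
}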
I0213 15:53:53.204739 10919 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0213 15:53:53.204759 10919 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18169-2790/.minikube CaCertPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18169-2790/.minikube}
I0213 15:53:53.204775 10919 buildroot.go:174] setting up certificates
I0213 15:53:53.204788 10919 provision.go:83] configureAuth start
I0213 15:53:53.204799 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetMachineName
I0213 15:53:53.204942 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetIP
I0213 15:53:53.205035 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:53:53.205109 10919 provision.go:138] copyHostCerts
I0213 15:53:53.205198 10919 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-2790/.minikube/key.pem, removing ...
I0213 15:53:53.205209 10919 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-2790/.minikube/key.pem
I0213 15:53:53.205346 10919 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-2790/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18169-2790/.minikube/key.pem (1679 bytes)
I0213 15:53:53.205591 10919 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-2790/.minikube/ca.pem, removing ...
I0213 15:53:53.205597 10919 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-2790/.minikube/ca.pem
I0213 15:53:53.205675 10919 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-2790/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18169-2790/.minikube/ca.pem (1082 bytes)
I0213 15:53:53.205856 10919 exec_runner.go:144] found /Users/jenkins/minikube-integration/18169-2790/.minikube/cert.pem, removing ...
I0213 15:53:53.205862 10919 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18169-2790/.minikube/cert.pem
I0213 15:53:53.205935 10919 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18169-2790/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18169-2790/.minikube/cert.pem (1123 bytes)
I0213 15:53:53.206086 10919 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18169-2790/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-603000 san=[192.169.0.44 192.169.0.44 localhost 127.0.0.1 minikube default-k8s-diff-port-603000]
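The server cert generated above carries the SANs listed in the san=[...] field so the Docker daemon's TLS endpoint is valid for the VM IP, localhost, and both machine names. A self-signed sketch with the same SANs (minikube actually signs with its machine CA at certs/ca.pem; the key size and serial here are arbitrary):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-603000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		// SANs from the log line above: IPs plus hostnames.
		IPAddresses: []net.IP{net.ParseIP("192.169.0.44"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-603000"},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for brevity; pass the CA cert and key as parent/signer instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}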
I0213 15:53:53.401997 10919 provision.go:172] copyRemoteCerts
I0213 15:53:53.402108 10919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0213 15:53:53.402136 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:53:53.402328 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:53:53.402461 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:53.402581 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:53:53.402725 10919 sshutil.go:53] new ssh client: &{IP:192.169.0.44 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/id_rsa Username:docker}
I0213 15:53:53.442436 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0213 15:53:53.458671 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0213 15:53:53.474647 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0213 15:53:53.490406 10919 provision.go:86] duration metric: configureAuth took 285.601195ms
I0213 15:53:53.490423 10919 buildroot.go:189] setting minikube options for container-runtime
I0213 15:53:53.490566 10919 config.go:182] Loaded profile config "default-k8s-diff-port-603000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:53:53.490579 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:53:53.490724 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:53:53.490843 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:53:53.490955 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:53.491050 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:53.491131 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:53:53.491247 10919 main.go:141] libmachine: Using SSH client type: native
I0213 15:53:53.491487 10919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 192.169.0.44 22 <nil> <nil>}
I0213 15:53:53.491498 10919 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0213 15:53:53.561625 10919 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0213 15:53:53.561643 10919 buildroot.go:70] root file system type: tmpfs
I0213 15:53:53.561707 10919 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0213 15:53:53.561723 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:53:53.561854 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:53:53.561947 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:53.562035 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:53.562115 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:53:53.562252 10919 main.go:141] libmachine: Using SSH client type: native
I0213 15:53:53.562507 10919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 192.169.0.44 22 <nil> <nil>}
I0213 15:53:53.562559 10919 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0213 15:53:53.638952 10919 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0213 15:53:53.638977 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:53:53.639129 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:53:53.639229 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:53.639323 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:53.639414 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:53:53.639534 10919 main.go:141] libmachine: Using SSH client type: native
I0213 15:53:53.639789 10919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 192.169.0.44 22 <nil> <nil>}
I0213 15:53:53.639805 10919 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0213 15:53:54.256176 10919 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
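The command above is an update-only-if-changed guard: diff -u exits non-zero when the rendered unit differs from the installed one, or (as here, where no docker.service existed yet) cannot stat it at all, and either case triggers the move, daemon-reload, enable, restart sequence. A sketch of the same pattern driven from Go:

package main

import (
	"fmt"
	"os/exec"
)

func updateDockerUnit() error {
	// Exit 0 from diff means the installed unit already matches the new one.
	if exec.Command("sudo", "diff", "-u",
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new").Run() == nil {
		return nil
	}
	for _, args := range [][]string{
		{"mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if err := exec.Command("sudo", args...).Run(); err != nil {
			return fmt.Errorf("%v: %w", args, err)
		}
	}
	return nil
}

func main() {
	if err := updateDockerUnit(); err != nil {
		fmt.Println(err)
	}
}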
I0213 15:53:54.256192 10919 machine.go:91] provisioned docker machine in 14.312422836s
I0213 15:53:54.256204 10919 start.go:300] post-start starting for "default-k8s-diff-port-603000" (driver="hyperkit")
I0213 15:53:54.256211 10919 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0213 15:53:54.256223 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:53:54.256409 10919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0213 15:53:54.256421 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:53:54.256510 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:53:54.256596 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:54.256686 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:53:54.256771 10919 sshutil.go:53] new ssh client: &{IP:192.169.0.44 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/id_rsa Username:docker}
I0213 15:53:54.296038 10919 ssh_runner.go:195] Run: cat /etc/os-release
I0213 15:53:54.298916 10919 info.go:137] Remote host: Buildroot 2021.02.12
I0213 15:53:54.298931 10919 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-2790/.minikube/addons for local assets ...
I0213 15:53:54.299025 10919 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18169-2790/.minikube/files for local assets ...
I0213 15:53:54.299619 10919 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18169-2790/.minikube/files/etc/ssl/certs/33422.pem -> 33422.pem in /etc/ssl/certs
I0213 15:53:54.299828 10919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0213 15:53:54.305755 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/files/etc/ssl/certs/33422.pem --> /etc/ssl/certs/33422.pem (1708 bytes)
I0213 15:53:54.322584 10919 start.go:303] post-start completed in 66.370514ms
I0213 15:53:54.322596 10919 fix.go:56] fixHost completed within 14.485492524s
I0213 15:53:54.322613 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:53:54.322749 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:53:54.322846 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:54.322923 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:54.323044 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:53:54.323167 10919 main.go:141] libmachine: Using SSH client type: native
I0213 15:53:54.323414 10919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 192.169.0.44 22 <nil> <nil>}
I0213 15:53:54.323422 10919 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0213 15:53:54.392058 10919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707868434.040583241
I0213 15:53:54.392074 10919 fix.go:206] guest clock: 1707868434.040583241
I0213 15:53:54.392079 10919 fix.go:219] Guest: 2024-02-13 15:53:54.040583241 -0800 PST Remote: 2024-02-13 15:53:54.322599 -0800 PST m=+15.178802027 (delta=-282.015759ms)
I0213 15:53:54.392097 10919 fix.go:190] guest clock delta is within tolerance: -282.015759ms
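fix.go compares the guest's `date +%s.%N` output against host time and only forces a resync when the skew exceeds a tolerance; the -282ms delta above passes. A small sketch of that check (the 2s tolerance is an assumption, not read from the log):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether |guest - host| is at most tol.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(-282015759 * time.Nanosecond) // delta reported in the log
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
}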
I0213 15:53:54.392105 10919 start.go:83] releasing machines lock for "default-k8s-diff-port-603000", held for 14.555032528s
I0213 15:53:54.392123 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:53:54.392252 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetIP
I0213 15:53:54.392355 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:53:54.392633 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:53:54.392722 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:53:54.392913 10919 ssh_runner.go:195] Run: cat /version.json
I0213 15:53:54.392925 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:53:54.393010 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:53:54.393100 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:54.393183 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:53:54.393267 10919 sshutil.go:53] new ssh client: &{IP:192.169.0.44 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/id_rsa Username:docker}
I0213 15:53:54.393298 10919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0213 15:53:54.393327 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:53:54.393405 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:53:54.393491 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:53:54.393568 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:53:54.393653 10919 sshutil.go:53] new ssh client: &{IP:192.169.0.44 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/id_rsa Username:docker}
I0213 15:53:54.429400 10919 ssh_runner.go:195] Run: systemctl --version
I0213 15:53:54.433656 10919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0213 15:53:54.478518 10919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0213 15:53:54.478596 10919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0213 15:53:54.489284 10919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0213 15:53:54.489301 10919 start.go:475] detecting cgroup driver to use...
I0213 15:53:54.489412 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0213 15:53:54.501085 10919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0213 15:53:54.507772 10919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0213 15:53:54.514300 10919 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0213 15:53:54.514354 10919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0213 15:53:54.520792 10919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0213 15:53:54.527695 10919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0213 15:53:54.534329 10919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0213 15:53:54.540993 10919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0213 15:53:54.547829 10919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0213 15:53:54.554403 10919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0213 15:53:54.560497 10919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0213 15:53:54.566471 10919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0213 15:53:54.647368 10919 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0213 15:53:54.660182 10919 start.go:475] detecting cgroup driver to use...
I0213 15:53:54.660255 10919 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0213 15:53:54.670561 10919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0213 15:53:54.681745 10919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0213 15:53:54.696011 10919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0213 15:53:54.704921 10919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0213 15:53:54.713909 10919 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0213 15:53:54.735393 10919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0213 15:53:54.744506 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0213 15:53:54.757363 10919 ssh_runner.go:195] Run: which cri-dockerd
I0213 15:53:54.759672 10919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0213 15:53:54.765360 10919 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0213 15:53:54.776919 10919 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0213 15:53:54.862744 10919 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0213 15:53:54.950800 10919 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0213 15:53:54.950870 10919 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0213 15:53:54.962250 10919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0213 15:53:55.048693 10919 ssh_runner.go:195] Run: sudo systemctl restart docker
I0213 15:53:56.361309 10919 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.31258068s)
I0213 15:53:56.361374 10919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0213 15:53:56.370115 10919 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0213 15:53:56.380123 10919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0213 15:53:56.389025 10919 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0213 15:53:56.473016 10919 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0213 15:53:56.562333 10919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0213 15:53:56.647853 10919 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0213 15:53:56.658711 10919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0213 15:53:56.667781 10919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0213 15:53:56.753140 10919 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0213 15:53:56.807267 10919 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0213 15:53:56.807343 10919 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0213 15:53:56.811238 10919 start.go:543] Will wait 60s for crictl version
I0213 15:53:56.811287 10919 ssh_runner.go:195] Run: which crictl
I0213 15:53:56.813700 10919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0213 15:53:56.848539 10919 start.go:559] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 24.0.7
RuntimeApiVersion: v1
I0213 15:53:56.848619 10919 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0213 15:53:56.865144 10919 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0213 15:53:56.925620 10919 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
I0213 15:53:56.925646 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetIP
I0213 15:53:56.925846 10919 ssh_runner.go:195] Run: grep 192.169.0.1 host.minikube.internal$ /etc/hosts
I0213 15:53:56.928316 10919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0213 15:53:56.936061 10919 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0213 15:53:56.936126 10919 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0213 15:53:56.949192 10919 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0213 15:53:56.949213 10919 docker.go:615] Images already preloaded, skipping extraction
I0213 15:53:56.949281 10919 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0213 15:53:56.962613 10919 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0213 15:53:56.962637 10919 cache_images.go:84] Images are preloaded, skipping loading
I0213 15:53:56.962709 10919 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0213 15:53:56.981130 10919 cni.go:84] Creating CNI manager for ""
I0213 15:53:56.981144 10919 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0213 15:53:56.981157 10919 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0213 15:53:56.981171 10919 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.44 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-603000 NodeName:default-k8s-diff-port-603000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0213 15:53:56.981263 10919 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.169.0.44
  bindPort: 8444
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "default-k8s-diff-port-603000"
  kubeletExtraArgs:
    node-ip: 192.169.0.44
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.169.0.44"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8444
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.28.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0213 15:53:56.981327 10919 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-603000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.44

[Install]
config:
{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
I0213 15:53:56.981386 10919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
I0213 15:53:56.987861 10919 binaries.go:44] Found k8s binaries, skipping transfer
I0213 15:53:56.987909 10919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0213 15:53:56.994107 10919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
I0213 15:53:57.005074 10919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0213 15:53:57.015904 10919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
I0213 15:53:57.027200 10919 ssh_runner.go:195] Run: grep 192.169.0.44 control-plane.minikube.internal$ /etc/hosts
I0213 15:53:57.029423 10919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.44 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0213 15:53:57.037278 10919 certs.go:56] Setting up /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/default-k8s-diff-port-603000 for IP: 192.169.0.44
I0213 15:53:57.037296 10919 certs.go:190] acquiring lock for shared ca certs: {Name:mkbda05235901fe7fd4e84a9c5103764710e2c54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:53:57.037475 10919 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18169-2790/.minikube/ca.key
I0213 15:53:57.037544 10919 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18169-2790/.minikube/proxy-client-ca.key
I0213 15:53:57.037657 10919 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/default-k8s-diff-port-603000/client.key
I0213 15:53:57.037736 10919 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/default-k8s-diff-port-603000/apiserver.key.46a96eba
I0213 15:53:57.037813 10919 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/default-k8s-diff-port-603000/proxy-client.key
I0213 15:53:57.038074 10919 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-2790/.minikube/certs/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/3342.pem (1338 bytes)
W0213 15:53:57.038126 10919 certs.go:433] ignoring /Users/jenkins/minikube-integration/18169-2790/.minikube/certs/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/3342_empty.pem, impossibly tiny 0 bytes
I0213 15:53:57.038135 10919 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-2790/.minikube/certs/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/ca-key.pem (1679 bytes)
I0213 15:53:57.038177 10919 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-2790/.minikube/certs/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/ca.pem (1082 bytes)
I0213 15:53:57.038206 10919 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-2790/.minikube/certs/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/cert.pem (1123 bytes)
I0213 15:53:57.038238 10919 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-2790/.minikube/certs/Users/jenkins/minikube-integration/18169-2790/.minikube/certs/key.pem (1679 bytes)
I0213 15:53:57.038303 10919 certs.go:437] found cert: /Users/jenkins/minikube-integration/18169-2790/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18169-2790/.minikube/files/etc/ssl/certs/33422.pem (1708 bytes)
I0213 15:53:57.038838 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/default-k8s-diff-port-603000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0213 15:53:57.055057 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/default-k8s-diff-port-603000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0213 15:53:57.070955 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/default-k8s-diff-port-603000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0213 15:53:57.086776 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/default-k8s-diff-port-603000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0213 15:53:57.102413 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0213 15:53:57.118106 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0213 15:53:57.133559 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0213 15:53:57.149457 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0213 15:53:57.165386 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/files/etc/ssl/certs/33422.pem --> /usr/share/ca-certificates/33422.pem (1708 bytes)
I0213 15:53:57.180909 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0213 15:53:57.196600 10919 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18169-2790/.minikube/certs/3342.pem --> /usr/share/ca-certificates/3342.pem (1338 bytes)
I0213 15:53:57.211898 10919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0213 15:53:57.223154 10919 ssh_runner.go:195] Run: openssl version
I0213 15:53:57.226592 10919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/33422.pem && ln -fs /usr/share/ca-certificates/33422.pem /etc/ssl/certs/33422.pem"
I0213 15:53:57.233826 10919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/33422.pem
I0213 15:53:57.236759 10919 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:57 /usr/share/ca-certificates/33422.pem
I0213 15:53:57.236799 10919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/33422.pem
I0213 15:53:57.240202 10919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/33422.pem /etc/ssl/certs/3ec20f2e.0"
I0213 15:53:57.247175 10919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0213 15:53:57.254335 10919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0213 15:53:57.257324 10919 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 22:50 /usr/share/ca-certificates/minikubeCA.pem
I0213 15:53:57.257364 10919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0213 15:53:57.260952 10919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0213 15:53:57.268141 10919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3342.pem && ln -fs /usr/share/ca-certificates/3342.pem /etc/ssl/certs/3342.pem"
I0213 15:53:57.275653 10919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3342.pem
I0213 15:53:57.278571 10919 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:57 /usr/share/ca-certificates/3342.pem
I0213 15:53:57.278617 10919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3342.pem
I0213 15:53:57.282147 10919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3342.pem /etc/ssl/certs/51391683.0"
I0213 15:53:57.289147 10919 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0213 15:53:57.291825 10919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0213 15:53:57.295407 10919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0213 15:53:57.298948 10919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0213 15:53:57.302451 10919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0213 15:53:57.305905 10919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0213 15:53:57.309496 10919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0213 15:53:57.313095 10919 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-603000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.169.0.44 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0213 15:53:57.313191 10919 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0213 15:53:57.325815 10919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0213 15:53:57.332190 10919 kubeadm.go:419] found existing configuration files, will attempt cluster restart
I0213 15:53:57.332204 10919 kubeadm.go:636] restartCluster start
I0213 15:53:57.332252 10919 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0213 15:53:57.338407 10919 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0213 15:53:57.338972 10919 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-603000" does not appear in /Users/jenkins/minikube-integration/18169-2790/kubeconfig
I0213 15:53:57.339268 10919 kubeconfig.go:146] "default-k8s-diff-port-603000" context is missing from /Users/jenkins/minikube-integration/18169-2790/kubeconfig - will repair!
I0213 15:53:57.339802 10919 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-2790/kubeconfig: {Name:mkf6bdf8196211b20577d90f94d0007015c44956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:53:57.341478 10919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0213 15:53:57.347588 10919 api_server.go:166] Checking apiserver status ...
I0213 15:53:57.347638 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:53:57.356207 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:53:57.849138 10919 api_server.go:166] Checking apiserver status ...
I0213 15:53:57.849284 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:53:57.859026 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:53:58.347944 10919 api_server.go:166] Checking apiserver status ...
I0213 15:53:58.348042 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:53:58.357033 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:53:58.848065 10919 api_server.go:166] Checking apiserver status ...
I0213 15:53:58.848247 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:53:58.857167 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:53:59.348553 10919 api_server.go:166] Checking apiserver status ...
I0213 15:53:59.348621 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:53:59.356963 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:53:59.848691 10919 api_server.go:166] Checking apiserver status ...
I0213 15:53:59.848816 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:53:59.857660 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:00.347967 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:00.348088 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:00.357528 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:00.847798 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:00.847904 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:00.858156 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:01.349046 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:01.349141 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:01.357799 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:01.848407 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:01.848523 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:01.857982 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:02.347851 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:02.347953 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:02.356556 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:02.848314 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:02.848399 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:02.857593 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:03.348073 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:03.348236 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:03.357705 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:03.847776 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:03.847866 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:03.856640 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:04.348125 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:04.348207 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:04.357207 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:04.848413 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:04.848510 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:04.858765 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:05.347784 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:05.347853 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:05.356465 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:05.849535 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:05.849643 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:05.858336 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:06.348297 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:06.348394 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:06.357691 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:06.849840 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:06.849958 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:06.859747 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:07.347897 10919 api_server.go:166] Checking apiserver status ...
I0213 15:54:07.347995 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0213 15:54:07.356960 10919 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0213 15:54:07.356975 10919 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
I0213 15:54:07.356989 10919 kubeadm.go:1135] stopping kube-system containers ...
I0213 15:54:07.357062 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0213 15:54:07.371229 10919 docker.go:483] Stopping containers: [d6d6bfc7e46f 441ed6acf02b 8fc06faf5daf cf10ab2a5821 7f2607327b51 166517917ebe fd971f8d8ee1 7516f7e0a7b3 33e10bdbe674 69aac7de960e f753753d8cc1 9108c4562e79 cad1f358b39e 68fe27be6ecf d475108f0118]
I0213 15:54:07.371307 10919 ssh_runner.go:195] Run: docker stop d6d6bfc7e46f 441ed6acf02b 8fc06faf5daf cf10ab2a5821 7f2607327b51 166517917ebe fd971f8d8ee1 7516f7e0a7b3 33e10bdbe674 69aac7de960e f753753d8cc1 9108c4562e79 cad1f358b39e 68fe27be6ecf d475108f0118
I0213 15:54:07.384947 10919 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0213 15:54:07.396080 10919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0213 15:54:07.402566 10919 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0213 15:54:07.402612 10919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0213 15:54:07.408981 10919 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0213 15:54:07.408990 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0213 15:54:07.480653 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0213 15:54:08.092528 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0213 15:54:08.226801 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0213 15:54:08.291048 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0213 15:54:08.353803 10919 api_server.go:52] waiting for apiserver process to appear ...
I0213 15:54:08.353867 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0213 15:54:08.855209 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0213 15:54:09.354261 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0213 15:54:09.853975 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0213 15:54:09.878931 10919 api_server.go:72] duration metric: took 1.525095463s to wait for apiserver process to appear ...
I0213 15:54:09.878951 10919 api_server.go:88] waiting for apiserver healthz status ...
I0213 15:54:09.878977 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:54:12.727324 10919 api_server.go:279] https://192.169.0.44:8444/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0213 15:54:12.727343 10919 api_server.go:103] status: https://192.169.0.44:8444/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0213 15:54:12.727354 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:54:12.756415 10919 api_server.go:279] https://192.169.0.44:8444/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0213 15:54:12.756434 10919 api_server.go:103] status: https://192.169.0.44:8444/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0213 15:54:12.879358 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:54:12.883398 10919 api_server.go:279] https://192.169.0.44:8444/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0213 15:54:12.883419 10919 api_server.go:103] status: https://192.169.0.44:8444/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0213 15:54:13.380763 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:54:13.387534 10919 api_server.go:279] https://192.169.0.44:8444/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
W0213 15:54:13.387551 10919 api_server.go:103] status: https://192.169.0.44:8444/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[+]poststarthook/apiservice-discovery-controller ok
healthz check failed
I0213 15:54:13.879253 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:54:13.882740 10919 api_server.go:279] https://192.169.0.44:8444/healthz returned 200:
ok
I0213 15:54:13.889935 10919 api_server.go:141] control plane version: v1.28.4
I0213 15:54:13.889954 10919 api_server.go:131] duration metric: took 4.010913173s to wait for apiserver health ...
I0213 15:54:13.889963 10919 cni.go:84] Creating CNI manager for ""
I0213 15:54:13.889973 10919 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0213 15:54:13.912590 10919 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0213 15:54:13.948179 10919 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0213 15:54:13.962418 10919 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0213 15:54:14.009894 10919 system_pods.go:43] waiting for kube-system pods to appear ...
I0213 15:54:14.016091 10919 system_pods.go:59] 8 kube-system pods found
I0213 15:54:14.016111 10919 system_pods.go:61] "coredns-5dd5756b68-7gs8v" [4dc98c1d-8765-47ea-9752-6280bc1ebde6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0213 15:54:14.016117 10919 system_pods.go:61] "etcd-default-k8s-diff-port-603000" [9d7d4f7d-a734-42d5-bc35-ee865aeb2554] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0213 15:54:14.016126 10919 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-603000" [c9ed8b20-609f-46fc-be78-c742486752de] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0213 15:54:14.016133 10919 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-603000" [097bbce7-da9a-4750-b41d-5df3d8d30af2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0213 15:54:14.016137 10919 system_pods.go:61] "kube-proxy-jc5nj" [e59cad79-d49b-456a-8ad6-d7915ffab536] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0213 15:54:14.016146 10919 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-603000" [14ffc876-351f-4cff-9baa-f256677af78d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0213 15:54:14.016155 10919 system_pods.go:61] "metrics-server-57f55c9bc5-24wh6" [ada20ec6-8771-4dcf-bf09-c630c1ffac78] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0213 15:54:14.016160 10919 system_pods.go:61] "storage-provisioner" [689fdbbd-483e-4345-95cb-c566fbbaf8d1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0213 15:54:14.016165 10919 system_pods.go:74] duration metric: took 6.260768ms to wait for pod list to return data ...
I0213 15:54:14.016171 10919 node_conditions.go:102] verifying NodePressure condition ...
I0213 15:54:14.018312 10919 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0213 15:54:14.018329 10919 node_conditions.go:123] node cpu capacity is 2
I0213 15:54:14.018346 10919 node_conditions.go:105] duration metric: took 2.171633ms to run NodePressure ...
I0213 15:54:14.018357 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0213 15:54:14.248512 10919 kubeadm.go:772] waiting for restarted kubelet to initialise ...
I0213 15:54:14.252486 10919 kubeadm.go:787] kubelet initialised
I0213 15:54:14.252498 10919 kubeadm.go:788] duration metric: took 3.972519ms waiting for restarted kubelet to initialise ...
I0213 15:54:14.252509 10919 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0213 15:54:14.256939 10919 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7gs8v" in "kube-system" namespace to be "Ready" ...
I0213 15:54:14.261215 10919 pod_ready.go:97] node "default-k8s-diff-port-603000" hosting pod "coredns-5dd5756b68-7gs8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:14.261233 10919 pod_ready.go:81] duration metric: took 4.281981ms waiting for pod "coredns-5dd5756b68-7gs8v" in "kube-system" namespace to be "Ready" ...
E0213 15:54:14.261240 10919 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-603000" hosting pod "coredns-5dd5756b68-7gs8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:14.261248 10919 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:14.265678 10919 pod_ready.go:97] node "default-k8s-diff-port-603000" hosting pod "etcd-default-k8s-diff-port-603000" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:14.265690 10919 pod_ready.go:81] duration metric: took 4.435853ms waiting for pod "etcd-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
E0213 15:54:14.265702 10919 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-603000" hosting pod "etcd-default-k8s-diff-port-603000" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:14.265707 10919 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:14.269552 10919 pod_ready.go:97] node "default-k8s-diff-port-603000" hosting pod "kube-apiserver-default-k8s-diff-port-603000" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:14.269567 10919 pod_ready.go:81] duration metric: took 3.854228ms waiting for pod "kube-apiserver-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
E0213 15:54:14.269575 10919 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-603000" hosting pod "kube-apiserver-default-k8s-diff-port-603000" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:14.269580 10919 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:14.413611 10919 pod_ready.go:97] node "default-k8s-diff-port-603000" hosting pod "kube-controller-manager-default-k8s-diff-port-603000" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:14.413626 10919 pod_ready.go:81] duration metric: took 144.036938ms waiting for pod "kube-controller-manager-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
E0213 15:54:14.413635 10919 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-603000" hosting pod "kube-controller-manager-default-k8s-diff-port-603000" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:14.413641 10919 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jc5nj" in "kube-system" namespace to be "Ready" ...
I0213 15:54:14.813347 10919 pod_ready.go:97] node "default-k8s-diff-port-603000" hosting pod "kube-proxy-jc5nj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:14.813361 10919 pod_ready.go:81] duration metric: took 399.706684ms waiting for pod "kube-proxy-jc5nj" in "kube-system" namespace to be "Ready" ...
E0213 15:54:14.813368 10919 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-603000" hosting pod "kube-proxy-jc5nj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:14.813374 10919 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:15.212766 10919 pod_ready.go:97] node "default-k8s-diff-port-603000" hosting pod "kube-scheduler-default-k8s-diff-port-603000" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:15.212779 10919 pod_ready.go:81] duration metric: took 399.389844ms waiting for pod "kube-scheduler-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
E0213 15:54:15.212789 10919 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-603000" hosting pod "kube-scheduler-default-k8s-diff-port-603000" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:15.212794 10919 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-24wh6" in "kube-system" namespace to be "Ready" ...
I0213 15:54:15.612545 10919 pod_ready.go:97] node "default-k8s-diff-port-603000" hosting pod "metrics-server-57f55c9bc5-24wh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:15.612559 10919 pod_ready.go:81] duration metric: took 399.749728ms waiting for pod "metrics-server-57f55c9bc5-24wh6" in "kube-system" namespace to be "Ready" ...
E0213 15:54:15.612567 10919 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-603000" hosting pod "metrics-server-57f55c9bc5-24wh6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:15.612573 10919 pod_ready.go:38] duration metric: took 1.360026027s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0213 15:54:15.612589 10919 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0213 15:54:15.621621 10919 ops.go:34] apiserver oom_adj: -16
I0213 15:54:15.621635 10919 kubeadm.go:640] restartCluster took 18.289078206s
I0213 15:54:15.621640 10919 kubeadm.go:406] StartCluster complete in 18.308205476s
I0213 15:54:15.621651 10919 settings.go:142] acquiring lock: {Name:mk2b7626a62b7e77e2709adebde10f119ed0f449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:54:15.621731 10919 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/18169-2790/kubeconfig
I0213 15:54:15.622669 10919 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18169-2790/kubeconfig: {Name:mkf6bdf8196211b20577d90f94d0007015c44956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 15:54:15.622943 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0213 15:54:15.622971 10919 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
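The toEnable map mirrors what the start flags requested: dashboard, default-storageclass, metrics-server, and storage-provisioner are true; everything else stays off. Outside the test harness the same state can be toggled with the addons subcommand, for example:

    minikube -p default-k8s-diff-port-603000 addons enable metrics-server
    minikube -p default-k8s-diff-port-603000 addons disable dashboard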
I0213 15:54:15.623011 10919 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-603000"
I0213 15:54:15.623018 10919 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-603000"
I0213 15:54:15.623024 10919 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-603000"
W0213 15:54:15.623030 10919 addons.go:243] addon storage-provisioner should already be in state true
I0213 15:54:15.623030 10919 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-603000"
I0213 15:54:15.623047 10919 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-603000"
W0213 15:54:15.623065 10919 addons.go:243] addon metrics-server should already be in state true
I0213 15:54:15.623065 10919 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-603000"
I0213 15:54:15.623074 10919 host.go:66] Checking if "default-k8s-diff-port-603000" exists ...
I0213 15:54:15.623088 10919 config.go:182] Loaded profile config "default-k8s-diff-port-603000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0213 15:54:15.623101 10919 host.go:66] Checking if "default-k8s-diff-port-603000" exists ...
I0213 15:54:15.623099 10919 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-603000"
I0213 15:54:15.623117 10919 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-603000"
W0213 15:54:15.623124 10919 addons.go:243] addon dashboard should already be in state true
I0213 15:54:15.623174 10919 host.go:66] Checking if "default-k8s-diff-port-603000" exists ...
I0213 15:54:15.623957 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:54:15.623995 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:54:15.624032 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:54:15.624091 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:54:15.624111 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:54:15.624338 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:54:15.625460 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:54:15.625656 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:54:15.634832 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57445
I0213 15:54:15.635252 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:54:15.635646 10919 main.go:141] libmachine: Using API Version 1
I0213 15:54:15.635660 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:54:15.635938 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:54:15.636161 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetState
I0213 15:54:15.636232 10919 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-603000" context rescaled to 1 replicas
I0213 15:54:15.636255 10919 start.go:223] Will wait 6m0s for node &{Name: IP:192.169.0.44 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
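The 6m0s node wait that starts here has a direct kubectl equivalent; a manual form, assuming the same context name, would be:

    kubectl --context default-k8s-diff-port-603000 wait --for=condition=Ready \
      node/default-k8s-diff-port-603000 --timeout=6m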
I0213 15:54:15.658032 10919 out.go:177] * Verifying Kubernetes components...
I0213 15:54:15.636312 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0213 15:54:15.637178 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57447
I0213 15:54:15.699039 10919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0213 15:54:15.638611 10919 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-603000"
W0213 15:54:15.699075 10919 addons.go:243] addon default-storageclass should already be in state true
I0213 15:54:15.639646 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57448
I0213 15:54:15.699107 10919 host.go:66] Checking if "default-k8s-diff-port-603000" exists ...
I0213 15:54:15.640503 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57449
I0213 15:54:15.658067 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | hyperkit pid from json: 10930
I0213 15:54:15.699595 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:54:15.699698 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:54:15.699747 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:54:15.699759 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:54:15.699795 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:54:15.700914 10919 main.go:141] libmachine: Using API Version 1
I0213 15:54:15.701100 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:54:15.701492 10919 main.go:141] libmachine: Using API Version 1
I0213 15:54:15.701503 10919 main.go:141] libmachine: Using API Version 1
I0213 15:54:15.701520 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:54:15.701524 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:54:15.701723 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:54:15.701867 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:54:15.701939 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:54:15.702790 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:54:15.703348 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:54:15.703515 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:54:15.703531 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:54:15.703555 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:54:15.703611 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:54:15.712515 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57453
I0213 15:54:15.713151 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:54:15.713659 10919 main.go:141] libmachine: Using API Version 1
I0213 15:54:15.713674 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:54:15.714006 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:54:15.714607 10919 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0213 15:54:15.714635 10919 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0213 15:54:15.715520 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57455
I0213 15:54:15.717630 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:54:15.716788 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57457
I0213 15:54:15.718041 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57458
I0213 15:54:15.718255 10919 main.go:141] libmachine: Using API Version 1
I0213 15:54:15.718275 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:54:15.718448 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:54:15.718489 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:54:15.718545 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:54:15.718682 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetState
I0213 15:54:15.718864 10919 main.go:141] libmachine: Using API Version 1
I0213 15:54:15.718883 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:54:15.718863 10919 main.go:141] libmachine: Using API Version 1
I0213 15:54:15.718896 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0213 15:54:15.718904 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:54:15.718907 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | hyperkit pid from json: 10930
I0213 15:54:15.719141 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:54:15.719168 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:54:15.719344 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetState
I0213 15:54:15.719375 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetState
I0213 15:54:15.719477 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0213 15:54:15.719494 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0213 15:54:15.719571 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | hyperkit pid from json: 10930
I0213 15:54:15.719592 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | hyperkit pid from json: 10930
I0213 15:54:15.720114 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:54:15.741194 10919 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0213 15:54:15.720598 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:54:15.720660 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:54:15.724118 10919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57461
I0213 15:54:15.762239 10919 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0213 15:54:15.762251 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
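"scp memory" here means the manifest bytes are streamed straight over the existing SSH session instead of being copied from a file on disk. Outside the harness, minikube cp gives a rough equivalent (a sketch; the local file name is hypothetical):

    minikube -p default-k8s-diff-port-603000 cp ./metrics-apiservice.yaml \
      /etc/kubernetes/addons/metrics-apiservice.yaml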
I0213 15:54:15.782982 10919 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0213 15:54:15.762269 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:54:15.762678 10919 main.go:141] libmachine: () Calling .GetVersion
I0213 15:54:15.771869 10919 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-603000" to be "Ready" ...
I0213 15:54:15.772056 10919 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0213 15:54:15.804421 10919 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0213 15:54:15.804536 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:54:15.825164 10919 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0213 15:54:15.825187 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0213 15:54:15.825312 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:54:15.825686 10919 main.go:141] libmachine: Using API Version 1
I0213 15:54:15.846043 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:54:15.867181 10919 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0213 15:54:15.867200 10919 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 15:54:15.888251 10919 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0213 15:54:15.888263 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0213 15:54:15.867339 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:54:15.888276 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:54:15.867350 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:54:15.888477 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:54:15.888521 10919 sshutil.go:53] new ssh client: &{IP:192.169.0.44 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/id_rsa Username:docker}
I0213 15:54:15.888543 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:54:15.888653 10919 main.go:141] libmachine: () Calling .GetMachineName
I0213 15:54:15.888696 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:54:15.888736 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:54:15.888820 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetState
I0213 15:54:15.888882 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:54:15.888888 10919 sshutil.go:53] new ssh client: &{IP:192.169.0.44 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/id_rsa Username:docker}
I0213 15:54:15.888954 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0213 15:54:15.889054 10919 sshutil.go:53] new ssh client: &{IP:192.169.0.44 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/id_rsa Username:docker}
I0213 15:54:15.889067 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | hyperkit pid from json: 10930
I0213 15:54:15.890389 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .DriverName
I0213 15:54:15.890554 10919 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
I0213 15:54:15.890563 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0213 15:54:15.890571 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHHostname
I0213 15:54:15.890672 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHPort
I0213 15:54:15.890783 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHKeyPath
I0213 15:54:15.890873 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .GetSSHUsername
I0213 15:54:15.890958 10919 sshutil.go:53] new ssh client: &{IP:192.169.0.44 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18169-2790/.minikube/machines/default-k8s-diff-port-603000/id_rsa Username:docker}
I0213 15:54:15.940615 10919 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0213 15:54:15.940627 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0213 15:54:15.952776 10919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0213 15:54:15.957539 10919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0213 15:54:15.963827 10919 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0213 15:54:15.963840 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0213 15:54:15.969589 10919 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0213 15:54:15.969599 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0213 15:54:15.983913 10919 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0213 15:54:15.983926 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0213 15:54:16.014328 10919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
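Once that apply returns, the deployment itself can be inspected directly; actual metrics (kubectl top) only start flowing after the pod turns Ready:

    kubectl --context default-k8s-diff-port-603000 -n kube-system get deploy metrics-server
    kubectl --context default-k8s-diff-port-603000 top nodes   # serves data only once metrics-server is Ready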
I0213 15:54:16.024221 10919 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0213 15:54:16.024241 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0213 15:54:16.062319 10919 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0213 15:54:16.062333 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0213 15:54:16.102936 10919 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0213 15:54:16.102949 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0213 15:54:16.170198 10919 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
I0213 15:54:16.170214 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0213 15:54:16.206233 10919 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0213 15:54:16.206250 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0213 15:54:16.219189 10919 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0213 15:54:16.219203 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0213 15:54:16.231753 10919 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0213 15:54:16.231765 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0213 15:54:16.244274 10919 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0213 15:54:16.244287 10919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0213 15:54:16.257541 10919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
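The ten dashboard manifests applied in one shot above are the same set minikube fronts with its proxy; after the apply succeeds the UI can be reached with:

    minikube -p default-k8s-diff-port-603000 dashboard --url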
I0213 15:54:17.083242 10919 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130420758s)
I0213 15:54:17.083267 10919 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.125688135s)
I0213 15:54:17.083283 10919 main.go:141] libmachine: Making call to close driver server
I0213 15:54:17.083289 10919 main.go:141] libmachine: Making call to close driver server
I0213 15:54:17.083293 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .Close
I0213 15:54:17.083296 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .Close
I0213 15:54:17.083449 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Closing plugin on server side
I0213 15:54:17.083451 10919 main.go:141] libmachine: Successfully made call to close driver server
I0213 15:54:17.083465 10919 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 15:54:17.083472 10919 main.go:141] libmachine: Successfully made call to close driver server
I0213 15:54:17.083474 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Closing plugin on server side
I0213 15:54:17.083480 10919 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 15:54:17.083485 10919 main.go:141] libmachine: Making call to close driver server
I0213 15:54:17.083491 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .Close
I0213 15:54:17.083474 10919 main.go:141] libmachine: Making call to close driver server
I0213 15:54:17.083508 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .Close
I0213 15:54:17.083602 10919 main.go:141] libmachine: Successfully made call to close driver server
I0213 15:54:17.083614 10919 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 15:54:17.083637 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Closing plugin on server side
I0213 15:54:17.083663 10919 main.go:141] libmachine: Successfully made call to close driver server
I0213 15:54:17.083675 10919 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 15:54:17.087984 10919 main.go:141] libmachine: Making call to close driver server
I0213 15:54:17.087996 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .Close
I0213 15:54:17.088131 10919 main.go:141] libmachine: Successfully made call to close driver server
I0213 15:54:17.088140 10919 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 15:54:17.088154 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Closing plugin on server side
I0213 15:54:17.188254 10919 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.173878914s)
I0213 15:54:17.188290 10919 main.go:141] libmachine: Making call to close driver server
I0213 15:54:17.188301 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .Close
I0213 15:54:17.188460 10919 main.go:141] libmachine: Successfully made call to close driver server
I0213 15:54:17.188478 10919 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 15:54:17.188482 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Closing plugin on server side
I0213 15:54:17.188485 10919 main.go:141] libmachine: Making call to close driver server
I0213 15:54:17.188491 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .Close
I0213 15:54:17.188616 10919 main.go:141] libmachine: Successfully made call to close driver server
I0213 15:54:17.188626 10919 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 15:54:17.188627 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Closing plugin on server side
I0213 15:54:17.188631 10919 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-603000"
I0213 15:54:17.464988 10919 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.2073895s)
I0213 15:54:17.465014 10919 main.go:141] libmachine: Making call to close driver server
I0213 15:54:17.465023 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .Close
I0213 15:54:17.465164 10919 main.go:141] libmachine: Successfully made call to close driver server
I0213 15:54:17.465174 10919 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 15:54:17.465186 10919 main.go:141] libmachine: Making call to close driver server
I0213 15:54:17.465189 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Closing plugin on server side
I0213 15:54:17.465197 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) Calling .Close
I0213 15:54:17.465320 10919 main.go:141] libmachine: Successfully made call to close driver server
I0213 15:54:17.465324 10919 main.go:141] libmachine: (default-k8s-diff-port-603000) DBG | Closing plugin on server side
I0213 15:54:17.465332 10919 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 15:54:17.488011 10919 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p default-k8s-diff-port-603000 addons enable metrics-server
I0213 15:54:17.507903 10919 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0213 15:54:17.528765 10919 addons.go:505] enable addons completed in 1.905760644s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
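A quick cross-check that all four addons really landed (a sketch against the same profile):

    minikube -p default-k8s-diff-port-603000 addons list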
I0213 15:54:17.828649 10919 node_ready.go:58] node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:20.329717 10919 node_ready.go:58] node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:22.827847 10919 node_ready.go:58] node "default-k8s-diff-port-603000" has status "Ready":"False"
I0213 15:54:23.329034 10919 node_ready.go:49] node "default-k8s-diff-port-603000" has status "Ready":"True"
I0213 15:54:23.329050 10919 node_ready.go:38] duration metric: took 7.50372839s waiting for node "default-k8s-diff-port-603000" to be "Ready" ...
I0213 15:54:23.329056 10919 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0213 15:54:23.333040 10919 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7gs8v" in "kube-system" namespace to be "Ready" ...
I0213 15:54:23.336358 10919 pod_ready.go:92] pod "coredns-5dd5756b68-7gs8v" in "kube-system" namespace has status "Ready":"True"
I0213 15:54:23.336370 10919 pod_ready.go:81] duration metric: took 3.319226ms waiting for pod "coredns-5dd5756b68-7gs8v" in "kube-system" namespace to be "Ready" ...
I0213 15:54:23.336376 10919 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:23.339676 10919 pod_ready.go:92] pod "etcd-default-k8s-diff-port-603000" in "kube-system" namespace has status "Ready":"True"
I0213 15:54:23.339685 10919 pod_ready.go:81] duration metric: took 3.304162ms waiting for pod "etcd-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:23.339693 10919 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:23.843871 10919 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-603000" in "kube-system" namespace has status "Ready":"True"
I0213 15:54:23.843884 10919 pod_ready.go:81] duration metric: took 504.174443ms waiting for pod "kube-apiserver-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:23.843891 10919 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:24.349110 10919 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-603000" in "kube-system" namespace has status "Ready":"True"
I0213 15:54:24.349121 10919 pod_ready.go:81] duration metric: took 505.21284ms waiting for pod "kube-controller-manager-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:24.349128 10919 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jc5nj" in "kube-system" namespace to be "Ready" ...
I0213 15:54:24.529150 10919 pod_ready.go:92] pod "kube-proxy-jc5nj" in "kube-system" namespace has status "Ready":"True"
I0213 15:54:24.529161 10919 pod_ready.go:81] duration metric: took 180.024734ms waiting for pod "kube-proxy-jc5nj" in "kube-system" namespace to be "Ready" ...
I0213 15:54:24.529168 10919 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:24.930189 10919 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-603000" in "kube-system" namespace has status "Ready":"True"
I0213 15:54:24.930202 10919 pod_ready.go:81] duration metric: took 401.019668ms waiting for pod "kube-scheduler-default-k8s-diff-port-603000" in "kube-system" namespace to be "Ready" ...
I0213 15:54:24.930210 10919 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-24wh6" in "kube-system" namespace to be "Ready" ...
I0213 15:54:26.934373 10919 pod_ready.go:102] pod "metrics-server-57f55c9bc5-24wh6" in "kube-system" namespace has status "Ready":"False"
[102 near-identical pod_ready.go:102 polls elided: pod "metrics-server-57f55c9bc5-24wh6" in "kube-system" namespace kept status "Ready":"False" on every check, roughly every 2-2.5s, from 15:54:28.937088 through 15:58:20.442291]
I0213 15:58:22.941441 10919 pod_ready.go:102] pod "metrics-server-57f55c9bc5-24wh6" in "kube-system" namespace has status "Ready":"False"
I0213 15:58:24.941359 10919 pod_ready.go:81] duration metric: took 4m0.004975787s waiting for pod "metrics-server-57f55c9bc5-24wh6" in "kube-system" namespace to be "Ready" ...
E0213 15:58:24.941373 10919 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0213 15:58:24.941378 10919 pod_ready.go:38] duration metric: took 4m1.606113814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
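This 4m0s timeout looks intentional rather than a cluster fault: the earlier "Using image fake.domain/registry.k8s.io/echoserver:1.4" line suggests the test deliberately points the metrics-server deployment at an unpullable image, so the pod can never pull and never turns Ready. Were this a real failure, the usual confirmation would be an ImagePullBackOff in the pod's events (a sketch):

    kubectl --context default-k8s-diff-port-603000 -n kube-system \
      describe pod metrics-server-57f55c9bc5-24wh6 | tail -n 20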
I0213 15:58:24.941391 10919 api_server.go:52] waiting for apiserver process to appear ...
I0213 15:58:24.941476 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:58:24.955751 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:58:24.955830 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:58:24.970231 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:58:24.970309 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:58:24.984160 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:58:24.984241 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:58:24.998206 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:58:24.998281 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:58:25.011700 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:58:25.011778 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:58:25.026061 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:58:25.026137 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:58:25.040315 10919 logs.go:276] 0 containers: []
W0213 15:58:25.040327 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:58:25.040392 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:58:25.054674 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:58:25.054757 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:58:25.068946 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:58:25.068967 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:58:25.068975 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:58:25.105517 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:58:25.105539 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:58:25.126748 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:58:25.126764 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:58:25.145818 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:58:25.145833 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:58:25.162222 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:58:25.162237 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:58:25.189824 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:58:25.189842 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:58:25.215933 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:58:25.215951 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:58:25.306084 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:58:25.306101 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:58:25.322374 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:58:25.322389 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:58:25.339176 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:58:25.339191 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:58:25.355043 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:58:25.355062 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:58:25.395135 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:58:25.395156 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:58:25.419539 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:58:25.419557 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:58:25.435322 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:58:25.435341 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:58:25.461447 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:58:25.461462 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:58:25.478674 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:58:25.478689 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:58:25.499537 10919 logs.go:123] Gathering logs for container status ...
I0213 15:58:25.499552 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
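The backtick expression in that command falls back to the literal string crictl when the binary is missing from PATH; the sudo crictl call then fails and the || branch runs docker ps instead. The same idiom with $() substitution (assumed interchangeable here):

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a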
I0213 15:58:25.553665 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:58:25.553681 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:58:25.599893 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:58:25.599913 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:58:25.610663 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:58:25.610678 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
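Each of these per-container passes can be reproduced in one shot outside the harness; minikube bundles the same sources (kubelet journal, dmesg, per-container docker logs) into a single file:

    minikube -p default-k8s-diff-port-603000 logs --file=./default-k8s-diff-port-603000.log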
I0213 15:58:28.134030 10919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0213 15:58:28.143688 10919 api_server.go:72] duration metric: took 4m12.50094823s to wait for apiserver process to appear ...
I0213 15:58:28.143699 10919 api_server.go:88] waiting for apiserver healthz status ...
I0213 15:58:28.143772 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:58:28.157962 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:58:28.158038 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:58:28.170483 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:58:28.170557 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:58:28.183578 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:58:28.183651 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:58:28.196627 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:58:28.196702 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:58:28.209245 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:58:28.209328 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:58:28.223121 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:58:28.223195 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:58:28.235823 10919 logs.go:276] 0 containers: []
W0213 15:58:28.235836 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:58:28.235902 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:58:28.249156 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:58:28.249228 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:58:28.262218 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:58:28.262237 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:58:28.262243 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:58:28.296760 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:58:28.296774 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:58:28.311479 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:58:28.311494 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:58:28.327525 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:58:28.327539 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:58:28.353671 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:58:28.353693 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:58:28.369809 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:58:28.369824 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:58:28.414836 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:58:28.414852 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:58:28.430836 10919 logs.go:123] Gathering logs for container status ...
I0213 15:58:28.430850 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:58:28.478461 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:58:28.478475 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:58:28.494126 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:58:28.494144 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:58:28.604223 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:58:28.604239 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:58:28.631249 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:58:28.631265 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:58:28.650821 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:58:28.650838 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:58:28.670437 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:58:28.670452 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:58:28.685985 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:58:28.686001 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:58:28.700560 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:58:28.700574 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:58:28.738398 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:58:28.738414 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:58:28.748710 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:58:28.748723 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:58:28.769266 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:58:28.769279 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:58:28.795715 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:58:28.795729 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:58:31.311325 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:58:36.311918 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 15:58:36.312054 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:58:36.326126 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:58:36.326202 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:58:36.340319 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:58:36.340395 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:58:36.353665 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:58:36.353744 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:58:36.366256 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:58:36.366340 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:58:36.380162 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:58:36.380235 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:58:36.393512 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:58:36.393590 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:58:36.406502 10919 logs.go:276] 0 containers: []
W0213 15:58:36.406517 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:58:36.406583 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:58:36.420128 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:58:36.420206 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:58:36.434034 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:58:36.434052 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:58:36.434059 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:58:36.472160 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:58:36.472179 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:58:36.488346 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:58:36.488363 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:58:36.513910 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:58:36.513926 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:58:36.530596 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:58:36.530610 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:58:36.551985 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:58:36.551998 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:58:36.567504 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:58:36.567518 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:58:36.582808 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:58:36.582825 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:58:36.607200 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:58:36.607214 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:58:36.637664 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:58:36.637678 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:58:36.653824 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:58:36.653838 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:58:36.669497 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:58:36.669511 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:58:36.683494 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:58:36.683509 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:58:36.764702 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:58:36.764717 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:58:36.789760 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:58:36.789775 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:58:36.810124 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:58:36.810137 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:58:36.824950 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:58:36.824967 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:58:36.863409 10919 logs.go:123] Gathering logs for container status ...
I0213 15:58:36.863424 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:58:36.909551 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:58:36.909566 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:58:36.953701 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:58:36.953717 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:58:39.469147 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:58:44.470805 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 15:58:44.470918 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:58:44.484829 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:58:44.484909 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:58:44.498572 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:58:44.498647 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:58:44.511935 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:58:44.512014 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:58:44.525419 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:58:44.525494 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:58:44.539485 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:58:44.539562 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:58:44.553302 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:58:44.553376 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:58:44.566445 10919 logs.go:276] 0 containers: []
W0213 15:58:44.566458 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:58:44.566519 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:58:44.579632 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:58:44.579705 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:58:44.592578 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:58:44.592595 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:58:44.592603 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:58:44.614429 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:58:44.614443 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:58:44.629171 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:58:44.629186 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:58:44.643591 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:58:44.643605 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:58:44.662780 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:58:44.662793 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:58:44.677555 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:58:44.677570 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:58:44.703854 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:58:44.703868 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:58:44.721104 10919 logs.go:123] Gathering logs for container status ...
I0213 15:58:44.721118 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:58:44.778025 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:58:44.778040 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:58:44.822663 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:58:44.822678 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:58:44.907422 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:58:44.907438 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:58:44.928336 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:58:44.928351 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:58:44.947877 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:58:44.947890 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:58:44.963677 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:58:44.963691 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:58:44.972589 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:58:44.972601 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:58:45.007401 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:58:45.007416 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:58:45.022960 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:58:45.022975 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:58:45.038517 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:58:45.038532 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:58:45.063302 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:58:45.063315 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:58:45.078260 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:58:45.078275 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:58:47.616583 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:58:52.617986 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 15:58:52.618188 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:58:52.632946 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:58:52.633028 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:58:52.646638 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:58:52.646716 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:58:52.659674 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:58:52.659750 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:58:52.672697 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:58:52.672768 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:58:52.686194 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:58:52.686292 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:58:52.700472 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:58:52.700547 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:58:52.713302 10919 logs.go:276] 0 containers: []
W0213 15:58:52.713314 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:58:52.713373 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:58:52.727095 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:58:52.727173 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:58:52.742643 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:58:52.742660 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:58:52.742666 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:58:52.777302 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:58:52.777319 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:58:52.792386 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:58:52.792400 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:58:52.809892 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:58:52.809906 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:58:52.824877 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:58:52.824892 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:58:52.863902 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:58:52.863919 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:58:52.883450 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:58:52.883464 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:58:52.899266 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:58:52.899280 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:58:52.919463 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:58:52.919477 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:58:52.928478 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:58:52.928490 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:58:53.008640 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:58:53.008655 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:58:53.027477 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:58:53.027492 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:58:53.043267 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:58:53.043283 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:58:53.058954 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:58:53.058968 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:58:53.083822 10919 logs.go:123] Gathering logs for container status ...
I0213 15:58:53.083837 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:58:53.132917 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:58:53.132932 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:58:53.180247 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:58:53.180265 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:58:53.201998 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:58:53.202011 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:58:53.216567 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:58:53.216580 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:58:53.251793 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:58:53.251810 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:58:55.773514 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:59:00.774893 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 15:59:00.775082 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:59:00.789747 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:59:00.789821 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:59:00.803604 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:59:00.803677 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:59:00.817037 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:59:00.817112 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:59:00.830762 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:59:00.830837 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:59:00.844945 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:59:00.845020 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:59:00.868423 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:59:00.868502 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:59:00.881896 10919 logs.go:276] 0 containers: []
W0213 15:59:00.881910 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:59:00.881971 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:59:00.895790 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:59:00.895866 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:59:00.912210 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:59:00.912228 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:59:00.912235 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:59:00.952018 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:59:00.952035 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:59:00.986394 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:59:00.986409 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:59:01.000547 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:59:01.000562 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:59:01.014604 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:59:01.014619 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:59:01.030007 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:59:01.030021 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:59:01.045648 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:59:01.045663 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:59:01.061252 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:59:01.061266 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:59:01.070132 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:59:01.070144 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:59:01.165950 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:59:01.165965 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:59:01.189824 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:59:01.189837 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:59:01.236382 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:59:01.236397 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:59:01.256182 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:59:01.256198 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:59:01.271570 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:59:01.271583 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:59:01.302769 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:59:01.302782 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:59:01.327932 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:59:01.327946 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:59:01.344217 10919 logs.go:123] Gathering logs for container status ...
I0213 15:59:01.344231 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:59:01.397525 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:59:01.397543 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:59:01.421046 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:59:01.421060 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:59:01.441682 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:59:01.441696 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:59:03.959343 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:59:08.960695 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 15:59:08.960915 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:59:08.977278 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:59:08.977352 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:59:08.990556 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:59:08.990629 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:59:09.004400 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:59:09.004475 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:59:09.018079 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:59:09.018153 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:59:09.032172 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:59:09.032252 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:59:09.045875 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:59:09.045946 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:59:09.062239 10919 logs.go:276] 0 containers: []
W0213 15:59:09.062252 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:59:09.062320 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:59:09.075527 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:59:09.075605 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:59:09.089532 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:59:09.089550 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:59:09.089560 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:59:09.113724 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:59:09.113737 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:59:09.141679 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:59:09.141692 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:59:09.156088 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:59:09.156102 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:59:09.240416 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:59:09.240430 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:59:09.261440 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:59:09.261454 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:59:09.294410 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:59:09.294425 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:59:09.315250 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:59:09.315264 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:59:09.330639 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:59:09.330652 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:59:09.347260 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:59:09.347274 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:59:09.385978 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:59:09.385993 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:59:09.395338 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:59:09.395351 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:59:09.410639 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:59:09.410653 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:59:09.430825 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:59:09.430839 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:59:09.451297 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:59:09.451310 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:59:09.466343 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:59:09.466357 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:59:09.481218 10919 logs.go:123] Gathering logs for container status ...
I0213 15:59:09.481232 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:59:09.530434 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:59:09.530448 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:59:09.579432 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:59:09.579448 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:59:09.594401 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:59:09.594415 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:59:12.120513 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:59:17.121483 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 15:59:17.121632 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:59:17.137144 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:59:17.137219 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:59:17.152021 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:59:17.152103 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:59:17.166098 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:59:17.166181 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:59:17.179857 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:59:17.179935 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:59:17.193467 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:59:17.193552 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:59:17.206927 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:59:17.207000 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:59:17.220182 10919 logs.go:276] 0 containers: []
W0213 15:59:17.220199 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:59:17.220269 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:59:17.233560 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:59:17.233650 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:59:17.247755 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:59:17.247772 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:59:17.247780 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:59:17.264129 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:59:17.264143 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:59:17.279330 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:59:17.279344 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:59:17.366117 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:59:17.366132 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:59:17.390668 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:59:17.390682 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:59:17.409796 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:59:17.409811 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:59:17.425104 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:59:17.425118 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:59:17.445112 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:59:17.445127 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:59:17.461454 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:59:17.461469 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:59:17.487113 10919 logs.go:123] Gathering logs for container status ...
I0213 15:59:17.487127 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:59:17.542596 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:59:17.542614 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:59:17.564814 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:59:17.564827 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:59:17.592379 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:59:17.592393 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:59:17.606789 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:59:17.606803 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:59:17.622755 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:59:17.622771 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:59:17.631637 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:59:17.631649 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:59:17.665166 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:59:17.665180 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:59:17.682809 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:59:17.682823 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:59:17.701872 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:59:17.701886 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:59:17.739442 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:59:17.739459 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:59:20.291259 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:59:25.292441 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 15:59:25.292614 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:59:25.307677 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:59:25.307751 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:59:25.320612 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:59:25.320688 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:59:25.334187 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:59:25.334261 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:59:25.351822 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:59:25.351901 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:59:25.365474 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:59:25.365551 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:59:25.379300 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:59:25.379380 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:59:25.393447 10919 logs.go:276] 0 containers: []
W0213 15:59:25.393460 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:59:25.393522 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:59:25.407596 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:59:25.407669 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:59:25.421164 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:59:25.421184 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:59:25.421196 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:59:25.440505 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:59:25.440519 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:59:25.461812 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:59:25.461825 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:59:25.477201 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:59:25.477215 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:59:25.498466 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:59:25.498482 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:59:25.523699 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:59:25.523713 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:59:25.571808 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:59:25.571822 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:59:25.586771 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:59:25.586786 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:59:25.602949 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:59:25.602962 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:59:25.617591 10919 logs.go:123] Gathering logs for container status ...
I0213 15:59:25.617605 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:59:25.664638 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:59:25.664653 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:59:25.691945 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:59:25.691959 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:59:25.701086 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:59:25.701097 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:59:25.786038 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:59:25.786053 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:59:25.809714 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:59:25.809729 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:59:25.843727 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:59:25.843744 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:59:25.859682 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:59:25.859696 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:59:25.876060 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:59:25.876073 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:59:25.892578 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:59:25.892593 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:59:25.909618 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:59:25.909632 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:59:28.448708 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:59:33.449449 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 15:59:33.449662 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:59:33.464758 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:59:33.464832 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:59:33.477602 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:59:33.477681 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:59:33.490971 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:59:33.491050 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:59:33.504512 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:59:33.504585 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:59:33.517341 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:59:33.517418 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:59:33.532977 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:59:33.533053 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:59:33.546453 10919 logs.go:276] 0 containers: []
W0213 15:59:33.546467 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:59:33.546530 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:59:33.559697 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:59:33.559775 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:59:33.575898 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:59:33.575916 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:59:33.575923 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:59:33.622128 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:59:33.622142 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:59:33.664103 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:59:33.664118 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:59:33.679088 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:59:33.679102 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:59:33.701479 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:59:33.701494 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:59:33.729572 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:59:33.729586 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:59:33.759269 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:59:33.759286 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:59:33.773869 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:59:33.773882 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:59:33.788257 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:59:33.788271 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:59:33.808152 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:59:33.808165 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:59:33.824757 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:59:33.824771 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:59:33.863852 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:59:33.863868 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:59:33.873539 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:59:33.873551 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:59:33.954511 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:59:33.954526 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:59:33.979251 10919 logs.go:123] Gathering logs for container status ...
I0213 15:59:33.979265 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:59:34.034034 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:59:34.034048 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:59:34.053456 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:59:34.053470 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:59:34.068914 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:59:34.068928 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:59:34.088965 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:59:34.088978 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:59:34.108486 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:59:34.108503 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:59:36.624047 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:59:41.624465 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 15:59:41.624739 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:59:41.639576 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:59:41.639652 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:59:41.653332 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:59:41.653408 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:59:41.666462 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:59:41.666538 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:59:41.679683 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:59:41.679765 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:59:41.694835 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:59:41.694912 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:59:41.708619 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:59:41.708699 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:59:41.721413 10919 logs.go:276] 0 containers: []
W0213 15:59:41.721427 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:59:41.721491 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:59:41.734194 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:59:41.734266 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:59:41.751013 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:59:41.751032 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:59:41.751039 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:59:41.769118 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:59:41.769137 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:59:41.788721 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:59:41.788736 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:59:41.803708 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:59:41.803723 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:59:41.818957 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:59:41.818973 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:59:41.835913 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:59:41.835928 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:59:41.867520 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:59:41.867534 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:59:41.892210 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:59:41.892224 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:59:41.927167 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:59:41.927182 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:59:41.964858 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:59:41.964873 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:59:41.974308 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:59:41.974320 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:59:41.989254 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:59:41.989269 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:59:42.004073 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:59:42.004088 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:59:42.018622 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:59:42.018637 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:59:42.033669 10919 logs.go:123] Gathering logs for container status ...
I0213 15:59:42.033683 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:59:42.086357 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:59:42.086372 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:59:42.138828 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:59:42.138847 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:59:42.277127 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:59:42.277142 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:59:42.300124 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:59:42.300139 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:59:42.315570 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:59:42.315585 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:59:44.835830 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:59:49.836694 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 15:59:49.836877 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:59:49.855913 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:59:49.855994 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:59:49.869161 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:59:49.869239 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:59:49.882361 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:59:49.882437 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:59:49.895780 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:59:49.895856 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:59:49.909340 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:59:49.909419 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:59:49.922658 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:59:49.922732 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:59:49.935216 10919 logs.go:276] 0 containers: []
W0213 15:59:49.935229 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:59:49.935294 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:59:49.948440 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:59:49.948515 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:59:49.961433 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:59:49.961469 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:59:49.961477 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:59:49.975966 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:59:49.975981 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:59:49.992003 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:59:49.992017 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:59:50.013718 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:59:50.013731 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:59:50.052271 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:59:50.052289 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:59:50.131063 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:59:50.131077 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:59:50.151187 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:59:50.151200 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 15:59:50.171771 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:59:50.171785 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:59:50.186696 10919 logs.go:123] Gathering logs for container status ...
I0213 15:59:50.186711 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:59:50.240751 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:59:50.240765 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:59:50.289458 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:59:50.289472 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:59:50.311569 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:59:50.311583 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:59:50.348251 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:59:50.348265 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:59:50.364392 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:59:50.364406 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:59:50.380265 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:59:50.380279 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:59:50.395547 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:59:50.395563 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:59:50.421830 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:59:50.421845 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:59:50.448828 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:59:50.448842 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:59:50.457957 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:59:50.457968 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:59:50.471926 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:59:50.471941 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:59:52.991666 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 15:59:57.992310 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 15:59:57.992409 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 15:59:58.005371 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 15:59:58.005442 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 15:59:58.018712 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 15:59:58.018786 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 15:59:58.032644 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 15:59:58.032718 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 15:59:58.049213 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 15:59:58.049290 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 15:59:58.062852 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 15:59:58.062931 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 15:59:58.076707 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 15:59:58.076784 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 15:59:58.089760 10919 logs.go:276] 0 containers: []
W0213 15:59:58.089773 10919 logs.go:278] No container was found matching "kindnet"
I0213 15:59:58.089836 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 15:59:58.102591 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 15:59:58.102665 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 15:59:58.115869 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 15:59:58.115888 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 15:59:58.115894 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 15:59:58.130637 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 15:59:58.130651 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 15:59:58.157833 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 15:59:58.157848 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 15:59:58.175159 10919 logs.go:123] Gathering logs for Docker ...
I0213 15:59:58.175172 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 15:59:58.215113 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 15:59:58.215127 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 15:59:58.238295 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 15:59:58.238310 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 15:59:58.258695 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 15:59:58.258711 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 15:59:58.296331 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 15:59:58.296345 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 15:59:58.311062 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 15:59:58.311075 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 15:59:58.325980 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 15:59:58.325995 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 15:59:58.342720 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 15:59:58.342735 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 15:59:58.359299 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 15:59:58.359312 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 15:59:58.385858 10919 logs.go:123] Gathering logs for kubelet ...
I0213 15:59:58.385870 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 15:59:58.433447 10919 logs.go:123] Gathering logs for dmesg ...
I0213 15:59:58.433463 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 15:59:58.443537 10919 logs.go:123] Gathering logs for container status ...
I0213 15:59:58.443551 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 15:59:58.498490 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 15:59:58.498503 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 15:59:58.512792 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 15:59:58.512806 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 15:59:58.526963 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 15:59:58.526976 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 15:59:58.546801 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 15:59:58.546814 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 15:59:58.633701 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 15:59:58.633716 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 16:00:01.157306 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 16:00:06.158921 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 16:00:06.159044 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 16:00:06.173328 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 16:00:06.173405 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 16:00:06.186401 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 16:00:06.186471 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 16:00:06.198890 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 16:00:06.198968 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 16:00:06.211942 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 16:00:06.212029 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 16:00:06.225464 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 16:00:06.225544 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 16:00:06.243552 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 16:00:06.243626 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 16:00:06.257195 10919 logs.go:276] 0 containers: []
W0213 16:00:06.257208 10919 logs.go:278] No container was found matching "kindnet"
I0213 16:00:06.257273 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 16:00:06.270368 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 16:00:06.270440 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 16:00:06.283964 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 16:00:06.283981 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 16:00:06.283988 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 16:00:06.304118 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 16:00:06.304133 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 16:00:06.319033 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 16:00:06.319047 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 16:00:06.338845 10919 logs.go:123] Gathering logs for dmesg ...
I0213 16:00:06.338858 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 16:00:06.347987 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 16:00:06.347999 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 16:00:06.429064 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 16:00:06.429079 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 16:00:06.463639 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 16:00:06.463652 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 16:00:06.479527 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 16:00:06.479542 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 16:00:06.511584 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 16:00:06.511598 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 16:00:06.527654 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 16:00:06.527669 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 16:00:06.543505 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 16:00:06.543519 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 16:00:06.558298 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 16:00:06.558312 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 16:00:06.573733 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 16:00:06.573747 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 16:00:06.602373 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 16:00:06.602387 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 16:00:06.622084 10919 logs.go:123] Gathering logs for kubelet ...
I0213 16:00:06.622102 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 16:00:06.671201 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 16:00:06.671217 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 16:00:06.694199 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 16:00:06.694213 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 16:00:06.713350 10919 logs.go:123] Gathering logs for container status ...
I0213 16:00:06.713365 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 16:00:06.766151 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 16:00:06.766165 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 16:00:06.784659 10919 logs.go:123] Gathering logs for Docker ...
I0213 16:00:06.784674 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 16:00:09.324344 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 16:00:14.325187 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 16:00:14.325327 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 16:00:14.340650 10919 logs.go:276] 2 containers: [67b5fd979937 69aac7de960e]
I0213 16:00:14.340725 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 16:00:14.354479 10919 logs.go:276] 2 containers: [8e7930e1af27 33e10bdbe674]
I0213 16:00:14.354551 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 16:00:14.367494 10919 logs.go:276] 2 containers: [dde67a3d6462 cf10ab2a5821]
I0213 16:00:14.367569 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 16:00:14.381515 10919 logs.go:276] 2 containers: [4a32e0095c50 7516f7e0a7b3]
I0213 16:00:14.381590 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 16:00:14.399300 10919 logs.go:276] 2 containers: [3a2c33446232 166517917ebe]
I0213 16:00:14.399378 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 16:00:14.412334 10919 logs.go:276] 2 containers: [b2bc8bfc1796 f753753d8cc1]
I0213 16:00:14.412410 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 16:00:14.425303 10919 logs.go:276] 0 containers: []
W0213 16:00:14.425316 10919 logs.go:278] No container was found matching "kindnet"
I0213 16:00:14.425376 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0213 16:00:14.439051 10919 logs.go:276] 2 containers: [35631bb46ae0 d2fb13268178]
I0213 16:00:14.439125 10919 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0213 16:00:14.452879 10919 logs.go:276] 1 containers: [9ad73b9e5c82]
I0213 16:00:14.452898 10919 logs.go:123] Gathering logs for kube-apiserver [69aac7de960e] ...
I0213 16:00:14.452905 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69aac7de960e"
I0213 16:00:14.486017 10919 logs.go:123] Gathering logs for etcd [8e7930e1af27] ...
I0213 16:00:14.486032 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e7930e1af27"
I0213 16:00:14.508992 10919 logs.go:123] Gathering logs for kube-proxy [166517917ebe] ...
I0213 16:00:14.509007 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 166517917ebe"
I0213 16:00:14.524204 10919 logs.go:123] Gathering logs for kubernetes-dashboard [9ad73b9e5c82] ...
I0213 16:00:14.524218 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ad73b9e5c82"
I0213 16:00:14.540398 10919 logs.go:123] Gathering logs for container status ...
I0213 16:00:14.540414 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0213 16:00:14.586321 10919 logs.go:123] Gathering logs for kubelet ...
I0213 16:00:14.586338 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 16:00:14.635342 10919 logs.go:123] Gathering logs for dmesg ...
I0213 16:00:14.635359 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 16:00:14.645134 10919 logs.go:123] Gathering logs for describe nodes ...
I0213 16:00:14.645145 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0213 16:00:14.727827 10919 logs.go:123] Gathering logs for kube-scheduler [7516f7e0a7b3] ...
I0213 16:00:14.727841 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7516f7e0a7b3"
I0213 16:00:14.754543 10919 logs.go:123] Gathering logs for kube-controller-manager [b2bc8bfc1796] ...
I0213 16:00:14.754557 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2bc8bfc1796"
I0213 16:00:14.781912 10919 logs.go:123] Gathering logs for kube-controller-manager [f753753d8cc1] ...
I0213 16:00:14.781926 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f753753d8cc1"
I0213 16:00:14.806969 10919 logs.go:123] Gathering logs for Docker ...
I0213 16:00:14.806982 10919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 16:00:14.844711 10919 logs.go:123] Gathering logs for kube-apiserver [67b5fd979937] ...
I0213 16:00:14.844726 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67b5fd979937"
I0213 16:00:14.867276 10919 logs.go:123] Gathering logs for coredns [dde67a3d6462] ...
I0213 16:00:14.867290 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dde67a3d6462"
I0213 16:00:14.882034 10919 logs.go:123] Gathering logs for coredns [cf10ab2a5821] ...
I0213 16:00:14.882051 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10ab2a5821"
I0213 16:00:14.896809 10919 logs.go:123] Gathering logs for kube-proxy [3a2c33446232] ...
I0213 16:00:14.896828 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a2c33446232"
I0213 16:00:14.912066 10919 logs.go:123] Gathering logs for etcd [33e10bdbe674] ...
I0213 16:00:14.912081 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e10bdbe674"
I0213 16:00:14.935538 10919 logs.go:123] Gathering logs for kube-scheduler [4a32e0095c50] ...
I0213 16:00:14.935551 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a32e0095c50"
I0213 16:00:14.951046 10919 logs.go:123] Gathering logs for storage-provisioner [35631bb46ae0] ...
I0213 16:00:14.951059 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35631bb46ae0"
I0213 16:00:14.965671 10919 logs.go:123] Gathering logs for storage-provisioner [d2fb13268178] ...
I0213 16:00:14.965685 10919 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2fb13268178"
I0213 16:00:17.481145 10919 api_server.go:253] Checking apiserver healthz at https://192.169.0.44:8444/healthz ...
I0213 16:00:22.481742 10919 api_server.go:269] stopped: https://192.169.0.44:8444/healthz: Get "https://192.169.0.44:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0213 16:00:22.503750 10919 out.go:177]
W0213 16:00:22.524657 10919 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
W0213 16:00:22.524675 10919 out.go:239] *
W0213 16:00:22.525784 10919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0213 16:00:22.603612 10919 out.go:177]
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p default-k8s-diff-port-603000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit --kubernetes-version=v1.28.4": exit status 80
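The failure mode above is the apiserver's /healthz endpoint never answering within the 6m0s node-wait window, while the log-gathering loop keeps succeeding over SSH. A minimal manual probe of the same state, assuming the VM is still reachable at 192.169.0.44 and the profile name from this run, might look like:

    # Probe the same healthz endpoint the test polls every ~8s
    # (-k skips TLS verification; the apiserver cert is signed by the cluster CA):
    curl -k --max-time 5 https://192.169.0.44:8444/healthz

    # Check the apiserver container state from inside the VM, mirroring the
    # docker ps filter used by the log gatherer above:
    minikube ssh -p default-k8s-diff-port-603000 -- "sudo docker ps -a --filter=name=k8s_kube-apiserver"

If curl times out but the ssh command works, the apiserver container (here 67b5fd979937 / 69aac7de960e) is the place to look, via the same "docker logs --tail 400" calls shown above.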
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-603000 -n default-k8s-diff-port-603000
E0213 16:00:24.488409 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/enable-default-cni-599000/client.crt: no such file or directory
E0213 16:00:25.212602 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/flannel-599000/client.crt: no such file or directory
E0213 16:00:26.510815 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/auto-599000/client.crt: no such file or directory
E0213 16:00:46.811257 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/kindnet-599000/client.crt: no such file or directory
E0213 16:01:07.850914 3342 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18169-2790/.minikube/profiles/bridge-599000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-603000 -n default-k8s-diff-port-603000: exit status 3 (1m15.090431593s)
-- stdout --
Error
-- /stdout --
** stderr **
E0213 16:01:37.817996 11103 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.44:22: connect: operation timed out
E0213 16:01:37.818019 11103 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.44:22: connect: operation timed out
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-603000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (478.67s)
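Note that the post-mortem status check fails differently from the start itself: the dial to 192.169.0.44:22 times out, suggesting the hyperkit VM became unreachable entirely, not just the apiserver. A host-side sketch for distinguishing the two, assuming the hyperkit driver and binary path from this run, could be:

    # Is a hyperkit process still alive for any profile?
    pgrep -fl hyperkit

    # What does minikube itself report for this profile, with full logging?
    out/minikube-darwin-amd64 status -p default-k8s-diff-port-603000 --alsologtostderr

A missing hyperkit process would point at the VM dying mid-test rather than a Kubernetes-level startup problem.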