=== RUN TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-amd64 start -p embed-certs-553677 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 14:00:16.717686 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/kindnet-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.368324 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.374833 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.386364 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.407921 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.449448 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.531007 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:22.692542 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:23.014386 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:23.656528 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:24.938484 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:27.401984 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/auto-838971/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:00:27.500470 1006263 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/calico-838971/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-553677 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.0: signal: killed (27m0.412647973s)
-- stdout --
* [embed-certs-553677] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20242
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting "embed-certs-553677" primary control-plane node in "embed-certs-553677" cluster
* Restarting existing kvm2 VM for "embed-certs-553677" ...
* Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
* Configuring bridge CNI (Container Networking Interface) ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-553677 addons enable metrics-server
* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
-- /stdout --
** stderr **
I0120 14:00:09.136331 1060798 out.go:345] Setting OutFile to fd 1 ...
I0120 14:00:09.136455 1060798 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 14:00:09.136464 1060798 out.go:358] Setting ErrFile to fd 2...
I0120 14:00:09.136469 1060798 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 14:00:09.136684 1060798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
I0120 14:00:09.137260 1060798 out.go:352] Setting JSON to false
I0120 14:00:09.138235 1060798 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":13351,"bootTime":1737368258,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0120 14:00:09.138350 1060798 start.go:139] virtualization: kvm guest
I0120 14:00:09.140578 1060798 out.go:177] * [embed-certs-553677] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0120 14:00:09.142079 1060798 out.go:177] - MINIKUBE_LOCATION=20242
I0120 14:00:09.142074 1060798 notify.go:220] Checking for updates...
I0120 14:00:09.143562 1060798 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 14:00:09.144993 1060798 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
I0120 14:00:09.146404 1060798 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
I0120 14:00:09.147692 1060798 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0120 14:00:09.148998 1060798 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 14:00:09.150691 1060798 config.go:182] Loaded profile config "embed-certs-553677": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 14:00:09.151141 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:00:09.151189 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:00:09.166623 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
I0120 14:00:09.167122 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:00:09.167718 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:00:09.167742 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:00:09.168137 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:00:09.168428 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:00:09.168757 1060798 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 14:00:09.169233 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:00:09.169310 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:00:09.184559 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
I0120 14:00:09.185140 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:00:09.185701 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:00:09.185731 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:00:09.186076 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:00:09.186290 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:00:09.227893 1060798 out.go:177] * Using the kvm2 driver based on existing profile
I0120 14:00:09.229385 1060798 start.go:297] selected driver: kvm2
I0120 14:00:09.229408 1060798 start.go:901] validating driver "kvm2" against &{Name:embed-certs-553677 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-553677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:00:09.229531 1060798 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 14:00:09.230237 1060798 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:00:09.230337 1060798 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-998973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0120 14:00:09.247147 1060798 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0120 14:00:09.247587 1060798 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 14:00:09.247634 1060798 cni.go:84] Creating CNI manager for ""
I0120 14:00:09.247685 1060798 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 14:00:09.247721 1060798 start.go:340] cluster config:
{Name:embed-certs-553677 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-553677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:00:09.247834 1060798 iso.go:125] acquiring lock: {Name:mk63965bcac7e5d2166c667dd03e4270f636bd53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:00:09.249884 1060798 out.go:177] * Starting "embed-certs-553677" primary control-plane node in "embed-certs-553677" cluster
I0120 14:00:09.251254 1060798 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 14:00:09.251313 1060798 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
I0120 14:00:09.251326 1060798 cache.go:56] Caching tarball of preloaded images
I0120 14:00:09.251426 1060798 preload.go:172] Found /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0120 14:00:09.251437 1060798 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
I0120 14:00:09.251541 1060798 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/config.json ...
I0120 14:00:09.251743 1060798 start.go:360] acquireMachinesLock for embed-certs-553677: {Name:mk36ae0f7b2d42a8734a6403f72836860fc4ccfa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0120 14:00:25.130435 1060798 start.go:364] duration metric: took 15.878602581s to acquireMachinesLock for "embed-certs-553677"
I0120 14:00:25.130512 1060798 start.go:96] Skipping create...Using existing machine configuration
I0120 14:00:25.130525 1060798 fix.go:54] fixHost starting:
I0120 14:00:25.130961 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:00:25.131024 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:00:25.151812 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
I0120 14:00:25.152266 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:00:25.152822 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:00:25.152854 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:00:25.153234 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:00:25.153468 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:00:25.153642 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
I0120 14:00:25.155461 1060798 fix.go:112] recreateIfNeeded on embed-certs-553677: state=Stopped err=<nil>
I0120 14:00:25.155490 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
W0120 14:00:25.155656 1060798 fix.go:138] unexpected machine state, will restart: <nil>
I0120 14:00:25.158121 1060798 out.go:177] * Restarting existing kvm2 VM for "embed-certs-553677" ...
I0120 14:00:25.159720 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Start
I0120 14:00:25.159943 1060798 main.go:141] libmachine: (embed-certs-553677) starting domain...
I0120 14:00:25.159967 1060798 main.go:141] libmachine: (embed-certs-553677) ensuring networks are active...
I0120 14:00:25.160789 1060798 main.go:141] libmachine: (embed-certs-553677) Ensuring network default is active
I0120 14:00:25.161303 1060798 main.go:141] libmachine: (embed-certs-553677) Ensuring network mk-embed-certs-553677 is active
I0120 14:00:25.161800 1060798 main.go:141] libmachine: (embed-certs-553677) getting domain XML...
I0120 14:00:25.162593 1060798 main.go:141] libmachine: (embed-certs-553677) creating domain...
I0120 14:00:26.523284 1060798 main.go:141] libmachine: (embed-certs-553677) waiting for IP...
I0120 14:00:26.524408 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:26.524955 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:26.525074 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:26.524944 1060911 retry.go:31] will retry after 222.778825ms: waiting for domain to come up
I0120 14:00:26.749767 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:26.750528 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:26.750560 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:26.750483 1060911 retry.go:31] will retry after 239.249302ms: waiting for domain to come up
I0120 14:00:26.991082 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:26.991790 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:26.991837 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:26.991741 1060911 retry.go:31] will retry after 416.399646ms: waiting for domain to come up
I0120 14:00:27.844878 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:27.845488 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:27.845517 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:27.845470 1060911 retry.go:31] will retry after 470.570909ms: waiting for domain to come up
I0120 14:00:28.318025 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:28.318569 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:28.318616 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:28.318550 1060911 retry.go:31] will retry after 725.900803ms: waiting for domain to come up
I0120 14:00:29.046621 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:29.047238 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:29.047263 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:29.047192 1060911 retry.go:31] will retry after 590.863404ms: waiting for domain to come up
I0120 14:00:29.639513 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:29.640030 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:29.640060 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:29.639986 1060911 retry.go:31] will retry after 779.536692ms: waiting for domain to come up
I0120 14:00:30.421805 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:30.422403 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:30.422464 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:30.422385 1060911 retry.go:31] will retry after 1.137826076s: waiting for domain to come up
I0120 14:00:31.561820 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:31.562422 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:31.562449 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:31.562392 1060911 retry.go:31] will retry after 1.724582419s: waiting for domain to come up
I0120 14:00:33.289526 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:33.290221 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:33.290253 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:33.290164 1060911 retry.go:31] will retry after 1.979389937s: waiting for domain to come up
I0120 14:00:35.271040 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:35.271737 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:35.271771 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:35.271698 1060911 retry.go:31] will retry after 2.702719811s: waiting for domain to come up
I0120 14:00:37.975637 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:37.976177 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:37.976205 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:37.976144 1060911 retry.go:31] will retry after 2.907988017s: waiting for domain to come up
I0120 14:00:40.886071 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:40.886547 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | unable to find current IP address of domain embed-certs-553677 in network mk-embed-certs-553677
I0120 14:00:40.886579 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | I0120 14:00:40.886505 1060911 retry.go:31] will retry after 3.55226413s: waiting for domain to come up
I0120 14:00:44.788861 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:44.789567 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has current primary IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:44.789606 1060798 main.go:141] libmachine: (embed-certs-553677) found domain IP: 192.168.72.136
I0120 14:00:44.789620 1060798 main.go:141] libmachine: (embed-certs-553677) reserving static IP address...
I0120 14:00:44.790314 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "embed-certs-553677", mac: "52:54:00:7d:7a:fd", ip: "192.168.72.136"} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:44.790340 1060798 main.go:141] libmachine: (embed-certs-553677) reserved static IP address 192.168.72.136 for domain embed-certs-553677
I0120 14:00:44.790367 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | skip adding static IP to network mk-embed-certs-553677 - found existing host DHCP lease matching {name: "embed-certs-553677", mac: "52:54:00:7d:7a:fd", ip: "192.168.72.136"}
I0120 14:00:44.790394 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Getting to WaitForSSH function...
I0120 14:00:44.790407 1060798 main.go:141] libmachine: (embed-certs-553677) waiting for SSH...
I0120 14:00:44.794659 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:44.795095 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:44.795127 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:44.795243 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Using SSH client type: external
I0120 14:00:44.795270 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa (-rw-------)
I0120 14:00:44.795309 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa -p 22] /usr/bin/ssh <nil>}
I0120 14:00:44.795325 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | About to run SSH command:
I0120 14:00:44.795362 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | exit 0
I0120 14:00:44.930778 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | SSH cmd err, output: <nil>:
I0120 14:00:44.931282 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetConfigRaw
I0120 14:00:44.932172 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetIP
I0120 14:00:44.935918 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:44.936516 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:44.936563 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:44.936656 1060798 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/config.json ...
I0120 14:00:44.936929 1060798 machine.go:93] provisionDockerMachine start ...
I0120 14:00:44.936952 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:00:44.937262 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:00:44.939866 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:44.940268 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:44.940317 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:44.940438 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:00:44.940624 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:44.940796 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:44.940995 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:00:44.941199 1060798 main.go:141] libmachine: Using SSH client type: native
I0120 14:00:44.941385 1060798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.136 22 <nil> <nil>}
I0120 14:00:44.941397 1060798 main.go:141] libmachine: About to run SSH command:
hostname
I0120 14:00:45.062836 1060798 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0120 14:00:45.062872 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetMachineName
I0120 14:00:45.063173 1060798 buildroot.go:166] provisioning hostname "embed-certs-553677"
I0120 14:00:45.063204 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetMachineName
I0120 14:00:45.063428 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:00:45.066583 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.067062 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:45.067089 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.067223 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:00:45.067440 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:45.067602 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:45.067748 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:00:45.067976 1060798 main.go:141] libmachine: Using SSH client type: native
I0120 14:00:45.068183 1060798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.136 22 <nil> <nil>}
I0120 14:00:45.068200 1060798 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-553677 && echo "embed-certs-553677" | sudo tee /etc/hostname
I0120 14:00:45.199195 1060798 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-553677
I0120 14:00:45.199230 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:00:45.202583 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.203009 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:45.203041 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.203214 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:00:45.203458 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:45.203698 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:45.203867 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:00:45.204107 1060798 main.go:141] libmachine: Using SSH client type: native
I0120 14:00:45.204400 1060798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.136 22 <nil> <nil>}
I0120 14:00:45.204433 1060798 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-553677' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-553677/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-553677' | sudo tee -a /etc/hosts;
fi
fi
I0120 14:00:45.326926 1060798 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 14:00:45.326969 1060798 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-998973/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-998973/.minikube}
I0120 14:00:45.326996 1060798 buildroot.go:174] setting up certificates
I0120 14:00:45.327009 1060798 provision.go:84] configureAuth start
I0120 14:00:45.327023 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetMachineName
I0120 14:00:45.327381 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetIP
I0120 14:00:45.330599 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.331028 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:45.331067 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.331303 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:00:45.333924 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.334385 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:45.334439 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.334549 1060798 provision.go:143] copyHostCerts
I0120 14:00:45.334623 1060798 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem, removing ...
I0120 14:00:45.334647 1060798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem
I0120 14:00:45.334718 1060798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem (1082 bytes)
I0120 14:00:45.334848 1060798 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem, removing ...
I0120 14:00:45.334865 1060798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem
I0120 14:00:45.334896 1060798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem (1123 bytes)
I0120 14:00:45.334980 1060798 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem, removing ...
I0120 14:00:45.334991 1060798 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem
I0120 14:00:45.335017 1060798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem (1675 bytes)
I0120 14:00:45.335085 1060798 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem org=jenkins.embed-certs-553677 san=[127.0.0.1 192.168.72.136 embed-certs-553677 localhost minikube]
I0120 14:00:45.559381 1060798 provision.go:177] copyRemoteCerts
I0120 14:00:45.559445 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 14:00:45.559475 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:00:45.562152 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.562469 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:45.562506 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.562677 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:00:45.562897 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:45.563020 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:00:45.563240 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:00:45.652039 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0120 14:00:45.680567 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0120 14:00:45.708749 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0120 14:00:45.736457 1060798 provision.go:87] duration metric: took 409.40887ms to configureAuth
I0120 14:00:45.736502 1060798 buildroot.go:189] setting minikube options for container-runtime
I0120 14:00:45.736743 1060798 config.go:182] Loaded profile config "embed-certs-553677": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 14:00:45.736759 1060798 machine.go:96] duration metric: took 799.816175ms to provisionDockerMachine
I0120 14:00:45.736767 1060798 start.go:293] postStartSetup for "embed-certs-553677" (driver="kvm2")
I0120 14:00:45.736781 1060798 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 14:00:45.736824 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:00:45.737243 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 14:00:45.737276 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:00:45.740300 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.740827 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:45.740864 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.741093 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:00:45.741357 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:45.741522 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:00:45.741710 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:00:45.828630 1060798 ssh_runner.go:195] Run: cat /etc/os-release
I0120 14:00:45.833818 1060798 info.go:137] Remote host: Buildroot 2023.02.9
I0120 14:00:45.833872 1060798 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-998973/.minikube/addons for local assets ...
I0120 14:00:45.833963 1060798 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-998973/.minikube/files for local assets ...
I0120 14:00:45.834099 1060798 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem -> 10062632.pem in /etc/ssl/certs
I0120 14:00:45.834268 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0120 14:00:45.845164 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem --> /etc/ssl/certs/10062632.pem (1708 bytes)
I0120 14:00:45.876226 1060798 start.go:296] duration metric: took 139.437685ms for postStartSetup
I0120 14:00:45.876281 1060798 fix.go:56] duration metric: took 20.745757423s for fixHost
I0120 14:00:45.876315 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:00:45.879709 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.880097 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:45.880131 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.880347 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:00:45.880589 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:45.880755 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:45.880989 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:00:45.881164 1060798 main.go:141] libmachine: Using SSH client type: native
I0120 14:00:45.881369 1060798 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.72.136 22 <nil> <nil>}
I0120 14:00:45.881385 1060798 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0120 14:00:45.994287 1060798 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381645.965468301
I0120 14:00:45.994315 1060798 fix.go:216] guest clock: 1737381645.965468301
I0120 14:00:45.994326 1060798 fix.go:229] Guest: 2025-01-20 14:00:45.965468301 +0000 UTC Remote: 2025-01-20 14:00:45.876285295 +0000 UTC m=+36.780783009 (delta=89.183006ms)
I0120 14:00:45.994371 1060798 fix.go:200] guest clock delta is within tolerance: 89.183006ms
I0120 14:00:45.994379 1060798 start.go:83] releasing machines lock for "embed-certs-553677", held for 20.863898065s
I0120 14:00:45.994409 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:00:45.994700 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetIP
I0120 14:00:45.997789 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.998225 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:45.998251 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:45.998493 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:00:45.999097 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:00:45.999284 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:00:45.999347 1060798 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0120 14:00:45.999411 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:00:45.999587 1060798 ssh_runner.go:195] Run: cat /version.json
I0120 14:00:45.999630 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:00:46.002787 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:46.003148 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:46.003274 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:46.003302 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:46.003554 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:46.003577 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:46.003622 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:00:46.003778 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:00:46.003873 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:46.003989 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:00:46.004048 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:00:46.004280 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:00:46.004284 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:00:46.004447 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:00:46.087485 1060798 ssh_runner.go:195] Run: systemctl --version
I0120 14:00:46.115664 1060798 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0120 14:00:46.123518 1060798 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0120 14:00:46.123609 1060798 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 14:00:46.147126 1060798 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0120 14:00:46.147166 1060798 start.go:495] detecting cgroup driver to use...
I0120 14:00:46.147253 1060798 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0120 14:00:46.182494 1060798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 14:00:46.200915 1060798 docker.go:217] disabling cri-docker service (if available) ...
I0120 14:00:46.201014 1060798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0120 14:00:46.218855 1060798 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0120 14:00:46.235015 1060798 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0120 14:00:46.368546 1060798 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0120 14:00:46.535139 1060798 docker.go:233] disabling docker service ...
I0120 14:00:46.535226 1060798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0120 14:00:46.551928 1060798 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0120 14:00:46.569189 1060798 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0120 14:00:46.721501 1060798 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0120 14:00:46.870799 1060798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0120 14:00:46.888859 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0120 14:00:46.922800 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0120 14:00:46.935631 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0120 14:00:46.947299 1060798 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0120 14:00:46.947365 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0120 14:00:46.959181 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 14:00:46.971239 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0120 14:00:46.982931 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 14:00:46.994688 1060798 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 14:00:47.006568 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0120 14:00:47.018188 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0120 14:00:47.029555 1060798 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0120 14:00:47.042008 1060798 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 14:00:47.053847 1060798 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0120 14:00:47.053914 1060798 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0120 14:00:47.068557 1060798 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
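The two commands above load the br_netfilter module and enable IPv4 forwarding, both of which the bridge CNI configured later in this start needs. A small standalone sketch (not part of minikube) that simply re-reads the same kernel settings to confirm they took effect:

// sysctl_check.go - sketch of re-checking the two kernel settings adjusted above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// readSysctl returns the trimmed contents of a /proc/sys entry, or an error note.
func readSysctl(path string) string {
	b, err := os.ReadFile(path)
	if err != nil {
		return "unavailable (" + err.Error() + ")"
	}
	return strings.TrimSpace(string(b))
}

func main() {
	fmt.Println("net.bridge.bridge-nf-call-iptables =", readSysctl("/proc/sys/net/bridge/bridge-nf-call-iptables"))
	fmt.Println("net.ipv4.ip_forward =", readSysctl("/proc/sys/net/ipv4/ip_forward"))
}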
I0120 14:00:47.079724 1060798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:00:47.244050 1060798 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0120 14:00:47.286700 1060798 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0120 14:00:47.286783 1060798 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 14:00:47.293695 1060798 retry.go:31] will retry after 1.046860485s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
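The stat fails because containerd was just restarted and has not yet recreated its socket, so the retry helper backs off and tries again. A minimal sketch of the same wait-for-socket loop, assuming nothing beyond the Go standard library (illustrative only, not minikube's retry.go):

// wait_for_socket.go - sketch of waiting for a Unix socket to appear,
// mirroring the "Will wait 60s for socket path" step in the log above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := time.Second // grow the delay a little after each miss
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists
		}
		time.Sleep(delay)
		delay += 500 * time.Millisecond
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is up")
}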
I0120 14:00:48.340998 1060798 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 14:00:48.348295 1060798 start.go:563] Will wait 60s for crictl version
I0120 14:00:48.348362 1060798 ssh_runner.go:195] Run: which crictl
I0120 14:00:48.353005 1060798 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0120 14:00:48.401857 1060798 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0120 14:00:48.401945 1060798 ssh_runner.go:195] Run: containerd --version
I0120 14:00:48.436624 1060798 ssh_runner.go:195] Run: containerd --version
I0120 14:00:48.469764 1060798 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
I0120 14:00:48.471367 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetIP
I0120 14:00:48.474978 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:48.475421 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:00:48.475451 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:00:48.475767 1060798 ssh_runner.go:195] Run: grep 192.168.72.1 host.minikube.internal$ /etc/hosts
I0120 14:00:48.481387 1060798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 14:00:48.496680 1060798 kubeadm.go:883] updating cluster {Name:embed-certs-553677 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-553677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0120 14:00:48.496831 1060798 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 14:00:48.496943 1060798 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 14:00:48.543621 1060798 containerd.go:627] all images are preloaded for containerd runtime.
I0120 14:00:48.543650 1060798 containerd.go:534] Images already preloaded, skipping extraction
I0120 14:00:48.543720 1060798 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 14:00:48.583058 1060798 containerd.go:627] all images are preloaded for containerd runtime.
I0120 14:00:48.583091 1060798 cache_images.go:84] Images are preloaded, skipping loading
I0120 14:00:48.583102 1060798 kubeadm.go:934] updating node { 192.168.72.136 8443 v1.32.0 containerd true true} ...
I0120 14:00:48.583248 1060798 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-553677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.136
[Install]
config:
{KubernetesVersion:v1.32.0 ClusterName:embed-certs-553677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0120 14:00:48.583324 1060798 ssh_runner.go:195] Run: sudo crictl info
I0120 14:00:48.626717 1060798 cni.go:84] Creating CNI manager for ""
I0120 14:00:48.626749 1060798 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 14:00:48.626763 1060798 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 14:00:48.626794 1060798 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.136 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-553677 NodeName:embed-certs-553677 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0120 14:00:48.626939 1060798 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.72.136
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "embed-certs-553677"
  kubeletExtraArgs:
  - name: "node-ip"
    value: "192.168.72.136"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.72.136"]
  extraArgs:
  - name: "enable-admission-plugins"
    value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
  - name: "allocate-node-cidrs"
    value: "true"
  - name: "leader-elect"
    value: "false"
scheduler:
  extraArgs:
  - name: "leader-elect"
    value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
    - name: "proxy-refresh-interval"
      value: "70000"
kubernetesVersion: v1.32.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
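The generated config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A rough sketch that splits and decodes such a file to confirm each document parses; the gopkg.in/yaml.v3 dependency and the on-disk path are assumptions for illustration, not part of the test:

// check_kubeadm_yaml.go - sketch that decodes each document in a
// multi-document kubeadm config and prints its kind.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log below
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break // no more documents
		} else if err != nil {
			log.Fatalf("invalid YAML document: %v", err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}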
I0120 14:00:48.627014 1060798 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
I0120 14:00:48.638594 1060798 binaries.go:44] Found k8s binaries, skipping transfer
I0120 14:00:48.638682 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 14:00:48.649443 1060798 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
I0120 14:00:48.672682 1060798 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0120 14:00:48.693688 1060798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2314 bytes)
I0120 14:00:48.714789 1060798 ssh_runner.go:195] Run: grep 192.168.72.136 control-plane.minikube.internal$ /etc/hosts
I0120 14:00:48.719444 1060798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.136 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 14:00:48.733671 1060798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:00:48.868720 1060798 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 14:00:48.892448 1060798 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677 for IP: 192.168.72.136
I0120 14:00:48.892480 1060798 certs.go:194] generating shared ca certs ...
I0120 14:00:48.892506 1060798 certs.go:226] acquiring lock for ca certs: {Name:mk3b53704e4ec52de26582ed9269b5c3b0eb7914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:00:48.892707 1060798 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-998973/.minikube/ca.key
I0120 14:00:48.892774 1060798 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.key
I0120 14:00:48.892792 1060798 certs.go:256] generating profile certs ...
I0120 14:00:48.892917 1060798 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/client.key
I0120 14:00:48.893048 1060798 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/apiserver.key.4b39fe5c
I0120 14:00:48.893105 1060798 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/proxy-client.key
I0120 14:00:48.893271 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263.pem (1338 bytes)
W0120 14:00:48.893313 1060798 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263_empty.pem, impossibly tiny 0 bytes
I0120 14:00:48.893327 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem (1675 bytes)
I0120 14:00:48.893365 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem (1082 bytes)
I0120 14:00:48.893403 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem (1123 bytes)
I0120 14:00:48.893435 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem (1675 bytes)
I0120 14:00:48.893489 1060798 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem (1708 bytes)
I0120 14:00:48.894289 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 14:00:48.942535 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0120 14:00:48.981045 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 14:00:49.024866 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0120 14:00:49.064664 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0120 14:00:49.111059 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0120 14:00:49.154084 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 14:00:49.196268 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/embed-certs-553677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0120 14:00:49.224461 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263.pem --> /usr/share/ca-certificates/1006263.pem (1338 bytes)
I0120 14:00:49.257755 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem --> /usr/share/ca-certificates/10062632.pem (1708 bytes)
I0120 14:00:49.291363 1060798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 14:00:49.325873 1060798 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0120 14:00:49.348619 1060798 ssh_runner.go:195] Run: openssl version
I0120 14:00:49.358463 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10062632.pem && ln -fs /usr/share/ca-certificates/10062632.pem /etc/ssl/certs/10062632.pem"
I0120 14:00:49.373474 1060798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10062632.pem
I0120 14:00:49.379380 1060798 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:56 /usr/share/ca-certificates/10062632.pem
I0120 14:00:49.379466 1060798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10062632.pem
I0120 14:00:49.386420 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10062632.pem /etc/ssl/certs/3ec20f2e.0"
I0120 14:00:49.400887 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 14:00:49.416345 1060798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 14:00:49.422379 1060798 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:48 /usr/share/ca-certificates/minikubeCA.pem
I0120 14:00:49.422463 1060798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 14:00:49.431905 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 14:00:49.446192 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1006263.pem && ln -fs /usr/share/ca-certificates/1006263.pem /etc/ssl/certs/1006263.pem"
I0120 14:00:49.464845 1060798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1006263.pem
I0120 14:00:49.470841 1060798 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:56 /usr/share/ca-certificates/1006263.pem
I0120 14:00:49.470936 1060798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1006263.pem
I0120 14:00:49.477897 1060798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1006263.pem /etc/ssl/certs/51391683.0"
I0120 14:00:49.493285 1060798 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0120 14:00:49.499356 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0120 14:00:49.512066 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0120 14:00:49.520694 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0120 14:00:49.528307 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0120 14:00:49.537554 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0120 14:00:49.547409 1060798 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
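Each check above runs openssl x509 -checkend 86400, which succeeds only if the certificate is still valid 24 hours from now. The same check expressed with Go's crypto/x509, as a sketch (the certificate path is taken from the log; everything else is illustrative):

// cert_checkend.go - sketch of the 24-hour expiry check that
// "openssl x509 -checkend 86400" performs in the lines above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}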
I0120 14:00:49.554863 1060798 kubeadm.go:392] StartCluster: {Name:embed-certs-553677 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-553677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:00:49.554982 1060798 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0120 14:00:49.555058 1060798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 14:00:49.601305 1060798 cri.go:89] found id: "ba4e0b7070c1f21b2fb5b10c65d9fa8449c0cfbb3609dfa150cc3e75c24b3a95"
I0120 14:00:49.601340 1060798 cri.go:89] found id: "8849248bcbccaaa5666cf1b10a904de017968aafdc8dee3b4c1e513e9406d17e"
I0120 14:00:49.601346 1060798 cri.go:89] found id: "c20ea93f59ac701e4df9672275e0ff67e1b14867b2327aa7b1c73eca3b8d6a88"
I0120 14:00:49.601352 1060798 cri.go:89] found id: "773b7e54100723de0d144e1f855bb6abccfe49ac3af85db764979610dd8a7768"
I0120 14:00:49.601356 1060798 cri.go:89] found id: "b124c5bdd444435d1aca8531c3ad4c61dca0e2f7a57508c1ed4cbda0226873c8"
I0120 14:00:49.601361 1060798 cri.go:89] found id: "67b8ff6e2106fb5e4450bf23978ca6b657d1642f27d072d2973a89e9898387cd"
I0120 14:00:49.601365 1060798 cri.go:89] found id: "6cd1df2057537d91ffd5cb9fe006440e912e0aa3015e8fdbb286736a7d4741fa"
I0120 14:00:49.601370 1060798 cri.go:89] found id: "43e4658ac5fadb19e6506e7f595a4ed0c8bea9c2fa098cfca02d0700f6a77d23"
I0120 14:00:49.601373 1060798 cri.go:89] found id: ""
I0120 14:00:49.601430 1060798 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0120 14:00:49.618071 1060798 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-20T14:00:49Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0120 14:00:49.618176 1060798 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 14:00:49.631147 1060798 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0120 14:00:49.631234 1060798 kubeadm.go:593] restartPrimaryControlPlane start ...
I0120 14:00:49.631307 1060798 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0120 14:00:49.642306 1060798 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0120 14:00:49.643040 1060798 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-553677" does not appear in /home/jenkins/minikube-integration/20242-998973/kubeconfig
I0120 14:00:49.643304 1060798 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-998973/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-553677" cluster setting kubeconfig missing "embed-certs-553677" context setting]
I0120 14:00:49.643720 1060798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-998973/kubeconfig: {Name:mkc416e4f6e76f39025eb204e9812d9900c83215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:00:49.645261 1060798 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0120 14:00:49.657306 1060798 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.136
I0120 14:00:49.657348 1060798 kubeadm.go:1160] stopping kube-system containers ...
I0120 14:00:49.657367 1060798 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0120 14:00:49.657431 1060798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 14:00:49.702230 1060798 cri.go:89] found id: "ba4e0b7070c1f21b2fb5b10c65d9fa8449c0cfbb3609dfa150cc3e75c24b3a95"
I0120 14:00:49.702257 1060798 cri.go:89] found id: "8849248bcbccaaa5666cf1b10a904de017968aafdc8dee3b4c1e513e9406d17e"
I0120 14:00:49.702260 1060798 cri.go:89] found id: "c20ea93f59ac701e4df9672275e0ff67e1b14867b2327aa7b1c73eca3b8d6a88"
I0120 14:00:49.702264 1060798 cri.go:89] found id: "773b7e54100723de0d144e1f855bb6abccfe49ac3af85db764979610dd8a7768"
I0120 14:00:49.702267 1060798 cri.go:89] found id: "b124c5bdd444435d1aca8531c3ad4c61dca0e2f7a57508c1ed4cbda0226873c8"
I0120 14:00:49.702270 1060798 cri.go:89] found id: "67b8ff6e2106fb5e4450bf23978ca6b657d1642f27d072d2973a89e9898387cd"
I0120 14:00:49.702272 1060798 cri.go:89] found id: "6cd1df2057537d91ffd5cb9fe006440e912e0aa3015e8fdbb286736a7d4741fa"
I0120 14:00:49.702274 1060798 cri.go:89] found id: "43e4658ac5fadb19e6506e7f595a4ed0c8bea9c2fa098cfca02d0700f6a77d23"
I0120 14:00:49.702277 1060798 cri.go:89] found id: ""
I0120 14:00:49.702283 1060798 cri.go:252] Stopping containers: [ba4e0b7070c1f21b2fb5b10c65d9fa8449c0cfbb3609dfa150cc3e75c24b3a95 8849248bcbccaaa5666cf1b10a904de017968aafdc8dee3b4c1e513e9406d17e c20ea93f59ac701e4df9672275e0ff67e1b14867b2327aa7b1c73eca3b8d6a88 773b7e54100723de0d144e1f855bb6abccfe49ac3af85db764979610dd8a7768 b124c5bdd444435d1aca8531c3ad4c61dca0e2f7a57508c1ed4cbda0226873c8 67b8ff6e2106fb5e4450bf23978ca6b657d1642f27d072d2973a89e9898387cd 6cd1df2057537d91ffd5cb9fe006440e912e0aa3015e8fdbb286736a7d4741fa 43e4658ac5fadb19e6506e7f595a4ed0c8bea9c2fa098cfca02d0700f6a77d23]
I0120 14:00:49.702351 1060798 ssh_runner.go:195] Run: which crictl
I0120 14:00:49.707421 1060798 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 ba4e0b7070c1f21b2fb5b10c65d9fa8449c0cfbb3609dfa150cc3e75c24b3a95 8849248bcbccaaa5666cf1b10a904de017968aafdc8dee3b4c1e513e9406d17e c20ea93f59ac701e4df9672275e0ff67e1b14867b2327aa7b1c73eca3b8d6a88 773b7e54100723de0d144e1f855bb6abccfe49ac3af85db764979610dd8a7768 b124c5bdd444435d1aca8531c3ad4c61dca0e2f7a57508c1ed4cbda0226873c8 67b8ff6e2106fb5e4450bf23978ca6b657d1642f27d072d2973a89e9898387cd 6cd1df2057537d91ffd5cb9fe006440e912e0aa3015e8fdbb286736a7d4741fa 43e4658ac5fadb19e6506e7f595a4ed0c8bea9c2fa098cfca02d0700f6a77d23
I0120 14:00:49.757026 1060798 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0120 14:00:49.776829 1060798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 14:00:49.790392 1060798 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 14:00:49.790434 1060798 kubeadm.go:157] found existing configuration files:
I0120 14:00:49.790525 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0120 14:00:49.802002 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0120 14:00:49.802105 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0120 14:00:49.813781 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0120 14:00:49.828281 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0120 14:00:49.828375 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0120 14:00:49.843993 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0120 14:00:49.858174 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0120 14:00:49.858259 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0120 14:00:49.870757 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0120 14:00:49.882769 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0120 14:00:49.882867 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0120 14:00:49.895507 1060798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 14:00:49.908298 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:00:50.083446 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:00:50.896086 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:00:51.147259 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:00:51.224080 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:00:51.332246 1060798 api_server.go:52] waiting for apiserver process to appear ...
I0120 14:00:51.332382 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:00:51.832815 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:00:52.332689 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:00:52.833065 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:00:52.861484 1060798 api_server.go:72] duration metric: took 1.52923944s to wait for apiserver process to appear ...
I0120 14:00:52.861523 1060798 api_server.go:88] waiting for apiserver healthz status ...
I0120 14:00:52.861555 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
I0120 14:00:52.862202 1060798 api_server.go:269] stopped: https://192.168.72.136:8443/healthz: Get "https://192.168.72.136:8443/healthz": dial tcp 192.168.72.136:8443: connect: connection refused
I0120 14:00:53.361875 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
I0120 14:00:55.730960 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0120 14:00:55.730999 1060798 api_server.go:103] status: https://192.168.72.136:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0120 14:00:55.731015 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
I0120 14:00:55.749785 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0120 14:00:55.749821 1060798 api_server.go:103] status: https://192.168.72.136:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0120 14:00:55.862208 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
I0120 14:00:55.915710 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0120 14:00:55.915742 1060798 api_server.go:103] status: https://192.168.72.136:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0120 14:00:56.362222 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
I0120 14:00:56.388494 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0120 14:00:56.388539 1060798 api_server.go:103] status: https://192.168.72.136:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0120 14:00:56.862160 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
I0120 14:00:56.870469 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0120 14:00:56.870580 1060798 api_server.go:103] status: https://192.168.72.136:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0120 14:00:57.362195 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
I0120 14:00:57.381451 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 200:
ok
I0120 14:00:57.395915 1060798 api_server.go:141] control plane version: v1.32.0
I0120 14:00:57.395970 1060798 api_server.go:131] duration metric: took 4.534437824s to wait for apiserver health ...
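The healthz progression above is the normal bootstrap sequence for a restarted apiserver: first the connection is refused, then /healthz returns 403 because the unauthenticated probe runs as system:anonymous, then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200. A rough sketch of such a poll loop, with TLS verification skipped purely to keep the example short (real code should trust the cluster CA); the address is taken from the log:

// healthz_poll.go - illustrative sketch of polling the apiserver /healthz
// endpoint until it reports 200, as the api_server.go lines above do.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is not in the system trust store; skipping
		// verification keeps the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.136:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}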
I0120 14:00:57.396008 1060798 cni.go:84] Creating CNI manager for ""
I0120 14:00:57.396022 1060798 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 14:00:57.397786 1060798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0120 14:00:57.399309 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0120 14:00:57.420248 1060798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0120 14:00:57.452035 1060798 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 14:00:57.468438 1060798 system_pods.go:59] 8 kube-system pods found
I0120 14:00:57.468489 1060798 system_pods.go:61] "coredns-668d6bf9bc-97dc2" [c98d0167-7d4e-43f0-be8d-dc702847de79] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0120 14:00:57.468504 1060798 system_pods.go:61] "etcd-embed-certs-553677" [640370fc-478b-4dd1-b546-634a1077cf6f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0120 14:00:57.468514 1060798 system_pods.go:61] "kube-apiserver-embed-certs-553677" [6d0da8ff-1d58-4b5b-88bb-8fa374a996a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0120 14:00:57.468521 1060798 system_pods.go:61] "kube-controller-manager-embed-certs-553677" [d415449a-97cd-4663-8351-90dd1820cbfc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0120 14:00:57.468529 1060798 system_pods.go:61] "kube-proxy-rs2x7" [23dba39c-292b-4df7-8d84-adf6233df385] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0120 14:00:57.468537 1060798 system_pods.go:61] "kube-scheduler-embed-certs-553677" [9e13df4f-f97d-4049-b460-bbf09bcaee47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0120 14:00:57.468593 1060798 system_pods.go:61] "metrics-server-f79f97bbb-5mwxz" [c190f5c5-67c1-4175-8677-62f6465c91da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 14:00:57.468604 1060798 system_pods.go:61] "storage-provisioner" [0588ceec-e063-45d6-9442-16c4d66afad3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0120 14:00:57.468612 1060798 system_pods.go:74] duration metric: took 16.547569ms to wait for pod list to return data ...
I0120 14:00:57.468620 1060798 node_conditions.go:102] verifying NodePressure condition ...
I0120 14:00:57.474898 1060798 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0120 14:00:57.474951 1060798 node_conditions.go:123] node cpu capacity is 2
I0120 14:00:57.474963 1060798 node_conditions.go:105] duration metric: took 6.338427ms to run NodePressure ...
I0120 14:00:57.474990 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:00:57.856387 1060798 kubeadm.go:724] waiting for restarted kubelet to initialise ...
I0120 14:00:57.864423 1060798 kubeadm.go:739] kubelet initialised
I0120 14:00:57.864453 1060798 kubeadm.go:740] duration metric: took 8.036091ms waiting for restarted kubelet to initialise ...
I0120 14:00:57.864465 1060798 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
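The "Ready" checks that follow read the PodReady condition from each pod's status. A minimal client-go sketch of the same check; the kubeconfig path is illustrative and the pod name is taken from the log below:

// pod_ready_check.go - sketch of a PodReady condition check like the one the
// pod_ready.go lines below report. Assumes client-go is available.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-668d6bf9bc-97dc2", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}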
I0120 14:00:57.872764 1060798 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-97dc2" in "kube-system" namespace to be "Ready" ...
I0120 14:00:59.882354 1060798 pod_ready.go:103] pod "coredns-668d6bf9bc-97dc2" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:01.885356 1060798 pod_ready.go:103] pod "coredns-668d6bf9bc-97dc2" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:03.881069 1060798 pod_ready.go:93] pod "coredns-668d6bf9bc-97dc2" in "kube-system" namespace has status "Ready":"True"
I0120 14:01:03.881099 1060798 pod_ready.go:82] duration metric: took 6.008294892s for pod "coredns-668d6bf9bc-97dc2" in "kube-system" namespace to be "Ready" ...
I0120 14:01:03.881110 1060798 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:01:05.388296 1060798 pod_ready.go:93] pod "etcd-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:01:05.388326 1060798 pod_ready.go:82] duration metric: took 1.507208465s for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:01:05.388339 1060798 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:01:07.395728 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:09.396354 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:10.897222 1060798 pod_ready.go:93] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:01:10.897247 1060798 pod_ready.go:82] duration metric: took 5.508900417s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:01:10.897258 1060798 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:01:10.903704 1060798 pod_ready.go:93] pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:01:10.903737 1060798 pod_ready.go:82] duration metric: took 6.470015ms for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:01:10.903752 1060798 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rs2x7" in "kube-system" namespace to be "Ready" ...
I0120 14:01:10.910715 1060798 pod_ready.go:93] pod "kube-proxy-rs2x7" in "kube-system" namespace has status "Ready":"True"
I0120 14:01:10.910750 1060798 pod_ready.go:82] duration metric: took 6.988172ms for pod "kube-proxy-rs2x7" in "kube-system" namespace to be "Ready" ...
I0120 14:01:10.910763 1060798 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:01:10.917871 1060798 pod_ready.go:93] pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:01:10.917900 1060798 pod_ready.go:82] duration metric: took 7.129507ms for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:01:10.917910 1060798 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace to be "Ready" ...
I0120 14:01:12.929849 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:15.427535 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:17.925661 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:19.926557 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:22.425890 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:24.926990 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:27.427353 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:29.927746 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:32.427139 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:34.929766 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:37.427460 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:39.926373 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:42.427255 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:44.428207 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:46.924388 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:48.926980 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:51.426188 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:53.926309 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:55.928068 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:01:58.425453 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:00.425835 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:02.552868 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:04.925300 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:06.926078 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:09.428390 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:11.428886 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:13.925379 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:16.425544 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:18.425726 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:20.924433 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:23.425780 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:25.924945 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:27.925705 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:30.431285 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:32.924795 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:34.926051 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:36.926685 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:39.425121 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:41.925316 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:43.925693 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:46.425212 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:48.425692 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:50.924586 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:53.425566 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:55.425685 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:02:57.926013 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:00.425153 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:02.924559 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:04.930297 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:07.426640 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:09.925548 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:12.424748 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:14.426806 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:16.923938 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:18.925195 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:20.925946 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:23.425061 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:25.925382 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:27.925943 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:30.424777 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:32.425034 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:34.426763 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:36.925094 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:39.424843 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:41.425799 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:43.925510 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:46.426472 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:48.427287 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:50.927809 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:53.425189 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:55.428748 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:03:57.926914 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:00.426009 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:02.924936 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:04.927250 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:07.423593 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:09.425157 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:11.425719 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:13.925414 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:16.426754 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:18.925095 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:20.926956 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:23.425946 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:25.927641 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:28.425557 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:30.426101 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:32.426240 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:34.426618 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:36.427081 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:38.926097 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:41.424924 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:43.425336 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:45.425579 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:47.926756 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:50.427277 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:52.925532 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:54.926430 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:57.426323 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:59.926968 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:02.425700 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:04.925140 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:06.925415 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:08.925905 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:10.918089 1060798 pod_ready.go:82] duration metric: took 4m0.000161453s for pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace to be "Ready" ...
E0120 14:05:10.918131 1060798 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace to be "Ready" (will not retry!)
I0120 14:05:10.918160 1060798 pod_ready.go:39] duration metric: took 4m13.053682746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
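When a wait like the one above times out, the pod's state can be inspected directly against the same cluster with kubectl; for example (illustrative only, assuming the kubectl context matches the profile name and the addon's usual k8s-app=metrics-server label):

  kubectl --context embed-certs-553677 -n kube-system get pods -l k8s-app=metrics-server
  kubectl --context embed-certs-553677 -n kube-system describe pod metrics-server-f79f97bbb-5mwxz
  kubectl --context embed-certs-553677 -n kube-system logs deploy/metrics-server --all-containers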
I0120 14:05:10.918201 1060798 kubeadm.go:597] duration metric: took 4m21.286948978s to restartPrimaryControlPlane
W0120 14:05:10.918306 1060798 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
! Unable to restart control-plane node(s), will reset cluster: <no value>
I0120 14:05:10.918352 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0120 14:05:12.920615 1060798 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.002231911s)
I0120 14:05:12.920701 1060798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0120 14:05:12.942116 1060798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 14:05:12.954775 1060798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 14:05:12.966775 1060798 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 14:05:12.966807 1060798 kubeadm.go:157] found existing configuration files:
I0120 14:05:12.966883 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0120 14:05:12.977602 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0120 14:05:12.977684 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0120 14:05:12.989019 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0120 14:05:13.000820 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0120 14:05:13.000898 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0120 14:05:13.016644 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0120 14:05:13.031439 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0120 14:05:13.031528 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0120 14:05:13.042457 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0120 14:05:13.055593 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0120 14:05:13.055669 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
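The four grep/rm pairs above are a stale-kubeconfig sweep: any file under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm is re-run. A rough shell equivalent of what the runner executes (sketch only, same paths as in the log):

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done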
I0120 14:05:13.068674 1060798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0120 14:05:13.130131 1060798 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
I0120 14:05:13.130201 1060798 kubeadm.go:310] [preflight] Running pre-flight checks
I0120 14:05:13.252056 1060798 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0120 14:05:13.252208 1060798 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0120 14:05:13.252350 1060798 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0120 14:05:13.262351 1060798 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0120 14:05:13.264231 1060798 out.go:235] - Generating certificates and keys ...
I0120 14:05:13.264325 1060798 kubeadm.go:310] [certs] Using existing ca certificate authority
I0120 14:05:13.264382 1060798 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0120 14:05:13.264450 1060798 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0120 14:05:13.264503 1060798 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0120 14:05:13.264566 1060798 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0120 14:05:13.264617 1060798 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0120 14:05:13.264693 1060798 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0120 14:05:13.264816 1060798 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0120 14:05:13.264980 1060798 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0120 14:05:13.265097 1060798 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0120 14:05:13.265160 1060798 kubeadm.go:310] [certs] Using the existing "sa" key
I0120 14:05:13.265250 1060798 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0120 14:05:13.376018 1060798 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0120 14:05:13.789822 1060798 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0120 14:05:13.884391 1060798 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0120 14:05:14.207456 1060798 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0120 14:05:14.442708 1060798 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0120 14:05:14.443884 1060798 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0120 14:05:14.447802 1060798 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0120 14:05:14.449454 1060798 out.go:235] - Booting up control plane ...
I0120 14:05:14.449591 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0120 14:05:14.449723 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0120 14:05:14.450498 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0120 14:05:14.474336 1060798 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0120 14:05:14.486142 1060798 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0120 14:05:14.486368 1060798 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0120 14:05:14.656630 1060798 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0120 14:05:14.656842 1060798 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0120 14:05:15.658053 1060798 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001688461s
I0120 14:05:15.658185 1060798 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0120 14:05:21.661193 1060798 kubeadm.go:310] [api-check] The API server is healthy after 6.00301289s
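A passing api-check means the static pods written to /etc/kubernetes/manifests came up; if that step ever hangs, the control-plane containers can be listed on the node with crictl (hedged example, assuming crictl is present in the guest as it normally is on minikube's containerd images):

  minikube -p embed-certs-553677 ssh -- sudo crictl ps --name kube-apiserver
  minikube -p embed-certs-553677 ssh -- sudo crictl ps --name etcd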
I0120 14:05:21.679639 1060798 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0120 14:05:21.697225 1060798 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0120 14:05:21.729640 1060798 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0120 14:05:21.730176 1060798 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-553677 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0120 14:05:21.743570 1060798 kubeadm.go:310] [bootstrap-token] Using token: qgu27t.iap2ani2n2k7zkjw
I0120 14:05:21.745349 1060798 out.go:235] - Configuring RBAC rules ...
I0120 14:05:21.745503 1060798 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0120 14:05:21.754153 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0120 14:05:21.765952 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0120 14:05:21.771799 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0120 14:05:21.779054 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0120 14:05:21.785557 1060798 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0120 14:05:22.071797 1060798 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0120 14:05:22.539495 1060798 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0120 14:05:23.070019 1060798 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0120 14:05:23.071157 1060798 kubeadm.go:310]
I0120 14:05:23.071304 1060798 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0120 14:05:23.071330 1060798 kubeadm.go:310]
I0120 14:05:23.071427 1060798 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0120 14:05:23.071438 1060798 kubeadm.go:310]
I0120 14:05:23.071470 1060798 kubeadm.go:310] mkdir -p $HOME/.kube
I0120 14:05:23.071548 1060798 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0120 14:05:23.071621 1060798 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0120 14:05:23.071631 1060798 kubeadm.go:310]
I0120 14:05:23.071735 1060798 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0120 14:05:23.071777 1060798 kubeadm.go:310]
I0120 14:05:23.071865 1060798 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0120 14:05:23.071878 1060798 kubeadm.go:310]
I0120 14:05:23.071948 1060798 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0120 14:05:23.072051 1060798 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0120 14:05:23.072144 1060798 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0120 14:05:23.072164 1060798 kubeadm.go:310]
I0120 14:05:23.072309 1060798 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0120 14:05:23.072412 1060798 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0120 14:05:23.072423 1060798 kubeadm.go:310]
I0120 14:05:23.072537 1060798 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qgu27t.iap2ani2n2k7zkjw \
I0120 14:05:23.072690 1060798 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:6117bbf309b9c45faa7e855ae242c4a905187b8a6090715b408f9a384f87e114 \
I0120 14:05:23.072722 1060798 kubeadm.go:310] --control-plane
I0120 14:05:23.072736 1060798 kubeadm.go:310]
I0120 14:05:23.072848 1060798 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0120 14:05:23.072867 1060798 kubeadm.go:310]
I0120 14:05:23.072985 1060798 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qgu27t.iap2ani2n2k7zkjw \
I0120 14:05:23.073167 1060798 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:6117bbf309b9c45faa7e855ae242c4a905187b8a6090715b408f9a384f87e114
I0120 14:05:23.075375 1060798 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
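The only preflight warning is the disabled kubelet unit, which is harmless here since the run starts kubelet explicitly later in the log; if desired it can be cleared exactly as the message suggests:

  minikube -p embed-certs-553677 ssh -- sudo systemctl enable kubelet.service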
I0120 14:05:23.075417 1060798 cni.go:84] Creating CNI manager for ""
I0120 14:05:23.075445 1060798 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 14:05:23.077601 1060798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0120 14:05:23.079121 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0120 14:05:23.091937 1060798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
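The bridge CNI step only copies a single conflist onto the node; if pod networking misbehaves, the installed file can be read back over SSH (file name taken from the scp line above):

  minikube -p embed-certs-553677 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist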
I0120 14:05:23.116874 1060798 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0120 14:05:23.116939 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:23.116978 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-553677 minikube.k8s.io/updated_at=2025_01_20T14_05_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=embed-certs-553677 minikube.k8s.io/primary=true
I0120 14:05:23.148895 1060798 ops.go:34] apiserver oom_adj: -16
I0120 14:05:23.378558 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:23.879347 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:24.379349 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:24.879187 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:25.379285 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:25.879105 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:26.379133 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:26.478857 1060798 kubeadm.go:1113] duration metric: took 3.36197683s to wait for elevateKubeSystemPrivileges
I0120 14:05:26.478907 1060798 kubeadm.go:394] duration metric: took 4m36.924060891s to StartCluster
I0120 14:05:26.478935 1060798 settings.go:142] acquiring lock: {Name:mked7f2376b8a06c64dcfd911ab4b0d95ecdbe2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:05:26.479036 1060798 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20242-998973/kubeconfig
I0120 14:05:26.481214 1060798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-998973/kubeconfig: {Name:mkc416e4f6e76f39025eb204e9812d9900c83215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:05:26.481626 1060798 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 14:05:26.481760 1060798 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0120 14:05:26.481876 1060798 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-553677"
I0120 14:05:26.481896 1060798 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-553677"
W0120 14:05:26.481905 1060798 addons.go:247] addon storage-provisioner should already be in state true
I0120 14:05:26.481906 1060798 config.go:182] Loaded profile config "embed-certs-553677": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 14:05:26.481916 1060798 addons.go:69] Setting default-storageclass=true in profile "embed-certs-553677"
I0120 14:05:26.481942 1060798 addons.go:69] Setting metrics-server=true in profile "embed-certs-553677"
I0120 14:05:26.481958 1060798 addons.go:238] Setting addon metrics-server=true in "embed-certs-553677"
W0120 14:05:26.481970 1060798 addons.go:247] addon metrics-server should already be in state true
I0120 14:05:26.481989 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
I0120 14:05:26.481957 1060798 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-553677"
I0120 14:05:26.481936 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
I0120 14:05:26.482431 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.482468 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.481939 1060798 addons.go:69] Setting dashboard=true in profile "embed-certs-553677"
I0120 14:05:26.482542 1060798 addons.go:238] Setting addon dashboard=true in "embed-certs-553677"
W0120 14:05:26.482554 1060798 addons.go:247] addon dashboard should already be in state true
I0120 14:05:26.482556 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.482578 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
I0120 14:05:26.482592 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.482543 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.482710 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.482972 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.483025 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.483426 1060798 out.go:177] * Verifying Kubernetes components...
I0120 14:05:26.485000 1060798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:05:26.503670 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35691
I0120 14:05:26.503915 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
I0120 14:05:26.503956 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44967
I0120 14:05:26.504290 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.504434 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.505146 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.505154 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.505171 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.505175 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.505608 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.505613 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.505894 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.506345 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.506391 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.506479 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.506502 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.506645 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.506751 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.507010 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.507160 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
I0120 14:05:26.507428 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
I0120 14:05:26.507754 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.508311 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.508336 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.508797 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.509512 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.509563 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.512304 1060798 addons.go:238] Setting addon default-storageclass=true in "embed-certs-553677"
W0120 14:05:26.512327 1060798 addons.go:247] addon default-storageclass should already be in state true
I0120 14:05:26.512357 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
I0120 14:05:26.512623 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.512672 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.529326 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45833
I0120 14:05:26.530030 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.530626 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.530648 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.530699 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
I0120 14:05:26.530970 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
I0120 14:05:26.531055 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.531380 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.531456 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.531589 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
I0120 14:05:26.531641 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.531661 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.532129 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.532156 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.532234 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.532425 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
I0120 14:05:26.532428 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
I0120 14:05:26.532828 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.532931 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.533311 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
I0120 14:05:26.535196 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.535230 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.535639 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.536245 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.536293 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.537777 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:05:26.538423 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:05:26.538544 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:05:26.540631 1060798 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0120 14:05:26.540639 1060798 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 14:05:26.540707 1060798 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0120 14:05:26.541975 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0120 14:05:26.541997 1060798 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0120 14:05:26.542019 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:05:26.542075 1060798 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:05:26.542094 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 14:05:26.542115 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:05:26.544926 1060798 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0120 14:05:26.546368 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0120 14:05:26.546392 1060798 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0120 14:05:26.546418 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:05:26.549578 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.549713 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.553664 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:05:26.553690 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.553947 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:05:26.554117 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:05:26.554221 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:05:26.554305 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:05:26.554626 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.554889 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:05:26.554914 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.555102 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:05:26.555168 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:05:26.555182 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.555284 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:05:26.555340 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:05:26.555596 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:05:26.555691 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:05:26.555715 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:05:26.555883 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:05:26.556015 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:05:26.560724 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34799
I0120 14:05:26.561235 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.561723 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.561738 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.562059 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.562297 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
I0120 14:05:26.564026 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:05:26.564278 1060798 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 14:05:26.564290 1060798 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0120 14:05:26.564304 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:05:26.567858 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.568393 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:05:26.568433 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.568556 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:05:26.568742 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:05:26.568910 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:05:26.569124 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:05:26.773077 1060798 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 14:05:26.800362 1060798 node_ready.go:35] waiting up to 6m0s for node "embed-certs-553677" to be "Ready" ...
I0120 14:05:26.843740 1060798 node_ready.go:49] node "embed-certs-553677" has status "Ready":"True"
I0120 14:05:26.843780 1060798 node_ready.go:38] duration metric: took 43.372924ms for node "embed-certs-553677" to be "Ready" ...
I0120 14:05:26.843796 1060798 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 14:05:26.873119 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0120 14:05:26.873149 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0120 14:05:26.874981 1060798 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:26.906789 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 14:05:26.940145 1060798 pod_ready.go:93] pod "etcd-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:05:26.940190 1060798 pod_ready.go:82] duration metric: took 65.181123ms for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:26.940211 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:26.969325 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0120 14:05:26.969365 1060798 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0120 14:05:26.969405 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:05:26.989583 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0120 14:05:26.989615 1060798 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0120 14:05:27.153235 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0120 14:05:27.153271 1060798 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0120 14:05:27.177818 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:05:27.177844 1060798 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0120 14:05:27.342345 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0120 14:05:27.342379 1060798 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0120 14:05:27.474579 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0120 14:05:27.474615 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0120 14:05:27.480859 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:05:27.583861 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0120 14:05:27.583897 1060798 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0120 14:05:27.625368 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:27.625405 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:27.625755 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:27.625774 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:27.625784 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:27.625792 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:27.626090 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:27.626113 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:27.626136 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
I0120 14:05:27.642156 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:27.642194 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:27.642522 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:27.642553 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:27.884652 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0120 14:05:27.884699 1060798 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0120 14:05:28.031119 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0120 14:05:28.031155 1060798 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0120 14:05:28.145159 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0120 14:05:28.145199 1060798 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0120 14:05:28.273725 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:05:28.273765 1060798 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0120 14:05:28.506539 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:05:28.887655 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.918209178s)
I0120 14:05:28.887715 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:28.887730 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:28.888066 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:28.888078 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:28.888089 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:28.888098 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:28.889637 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:28.889660 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:28.889672 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
I0120 14:05:28.971702 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:29.421863 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.940948518s)
I0120 14:05:29.421940 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:29.421960 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:29.422340 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
I0120 14:05:29.422359 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:29.422381 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:29.422399 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:29.422412 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:29.422673 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:29.422690 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:29.422702 1060798 addons.go:479] Verifying addon metrics-server=true in "embed-certs-553677"
I0120 14:05:29.422725 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
I0120 14:05:30.228977 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.722367434s)
I0120 14:05:30.229039 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:30.229056 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:30.229398 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:30.229421 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:30.229431 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:30.229439 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:30.229692 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:30.229713 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:30.231477 1060798 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-553677 addons enable metrics-server
I0120 14:05:30.233108 1060798 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0120 14:05:30.234556 1060798 addons.go:514] duration metric: took 3.752807641s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
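Once the four addons are reported enabled, their workloads can be checked directly (illustrative commands, assuming the standard namespaces the minikube addons deploy into):

  minikube -p embed-certs-553677 addons list
  kubectl --context embed-certs-553677 -n kube-system get pods -l k8s-app=metrics-server
  kubectl --context embed-certs-553677 -n kubernetes-dashboard get pods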
I0120 14:05:31.446192 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:33.453220 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:35.447702 1060798 pod_ready.go:93] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:05:35.447735 1060798 pod_ready.go:82] duration metric: took 8.507515045s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.447745 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.453130 1060798 pod_ready.go:93] pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:05:35.453158 1060798 pod_ready.go:82] duration metric: took 5.406746ms for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.453169 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p5rcq" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.457466 1060798 pod_ready.go:93] pod "kube-proxy-p5rcq" in "kube-system" namespace has status "Ready":"True"
I0120 14:05:35.457492 1060798 pod_ready.go:82] duration metric: took 4.316578ms for pod "kube-proxy-p5rcq" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.457503 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.462012 1060798 pod_ready.go:93] pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:05:35.462036 1060798 pod_ready.go:82] duration metric: took 4.526901ms for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.462043 1060798 pod_ready.go:39] duration metric: took 8.61823381s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 14:05:35.462058 1060798 api_server.go:52] waiting for apiserver process to appear ...
I0120 14:05:35.462111 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:05:35.477958 1060798 api_server.go:72] duration metric: took 8.996279799s to wait for apiserver process to appear ...
I0120 14:05:35.477993 1060798 api_server.go:88] waiting for apiserver healthz status ...
I0120 14:05:35.478019 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
I0120 14:05:35.483505 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 200:
ok
I0120 14:05:35.484660 1060798 api_server.go:141] control plane version: v1.32.0
I0120 14:05:35.484690 1060798 api_server.go:131] duration metric: took 6.687782ms to wait for apiserver health ...
I0120 14:05:35.484701 1060798 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 14:05:35.490073 1060798 system_pods.go:59] 9 kube-system pods found
I0120 14:05:35.490118 1060798 system_pods.go:61] "coredns-668d6bf9bc-6dk7s" [1bba3148-0210-42ef-b08e-753e16365e33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0120 14:05:35.490129 1060798 system_pods.go:61] "coredns-668d6bf9bc-88phd" [dfc4947e-a505-4337-99d3-156d86f7646c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0120 14:05:35.490137 1060798 system_pods.go:61] "etcd-embed-certs-553677" [c915afbe-8665-4fbf-bcae-802c3ca214dd] Running
I0120 14:05:35.490143 1060798 system_pods.go:61] "kube-apiserver-embed-certs-553677" [d04063fb-d723-4a72-9024-0b6ceba0f09d] Running
I0120 14:05:35.490149 1060798 system_pods.go:61] "kube-controller-manager-embed-certs-553677" [c6de6703-1533-4391-a67e-f2c2208ebafe] Running
I0120 14:05:35.490153 1060798 system_pods.go:61] "kube-proxy-p5rcq" [3a9ddae1-ef67-4dd0-9c18-77e796c37d2a] Running
I0120 14:05:35.490157 1060798 system_pods.go:61] "kube-scheduler-embed-certs-553677" [10c63c3f-0748-4af6-94fb-a0ca644d4c61] Running
I0120 14:05:35.490164 1060798 system_pods.go:61] "metrics-server-f79f97bbb-b92sv" [f9b310a6-0d19-4084-aeae-ebe0a395d042] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 14:05:35.490170 1060798 system_pods.go:61] "storage-provisioner" [a6c0070e-1e3c-48af-80e3-1c3ca9163bf8] Running
I0120 14:05:35.490179 1060798 system_pods.go:74] duration metric: took 5.471078ms to wait for pod list to return data ...
I0120 14:05:35.490189 1060798 default_sa.go:34] waiting for default service account to be created ...
I0120 14:05:35.493453 1060798 default_sa.go:45] found service account: "default"
I0120 14:05:35.493489 1060798 default_sa.go:55] duration metric: took 3.2839ms for default service account to be created ...
I0120 14:05:35.493500 1060798 system_pods.go:137] waiting for k8s-apps to be running ...
I0120 14:05:35.648514 1060798 system_pods.go:87] 9 kube-system pods found
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-553677 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.0": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-553677 -n embed-certs-553677
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p embed-certs-553677 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-553677 logs -n 25: (1.421405846s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| image | old-k8s-version-743378 image | old-k8s-version-743378 | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:03 UTC |
| | list --format=json | | | | | |
| pause | -p old-k8s-version-743378 | old-k8s-version-743378 | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:03 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p old-k8s-version-743378 | old-k8s-version-743378 | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:03 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p old-k8s-version-743378 | old-k8s-version-743378 | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:03 UTC |
| delete | -p old-k8s-version-743378 | old-k8s-version-743378 | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:03 UTC |
| start | -p newest-cni-488874 --memory=2200 --alsologtostderr | newest-cni-488874 | jenkins | v1.35.0 | 20 Jan 25 14:03 UTC | 20 Jan 25 14:04 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.0 | | | | | |
| addons | enable metrics-server -p newest-cni-488874 | newest-cni-488874 | jenkins | v1.35.0 | 20 Jan 25 14:04 UTC | 20 Jan 25 14:04 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p newest-cni-488874 | newest-cni-488874 | jenkins | v1.35.0 | 20 Jan 25 14:04 UTC | 20 Jan 25 14:04 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p newest-cni-488874 | newest-cni-488874 | jenkins | v1.35.0 | 20 Jan 25 14:04 UTC | 20 Jan 25 14:04 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p newest-cni-488874 --memory=2200 --alsologtostderr | newest-cni-488874 | jenkins | v1.35.0 | 20 Jan 25 14:04 UTC | 20 Jan 25 14:05 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.0 | | | | | |
| image | no-preload-097312 image list | no-preload-097312 | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-097312 | no-preload-097312 | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-097312 | no-preload-097312 | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-097312 | no-preload-097312 | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
| delete | -p no-preload-097312 | no-preload-097312 | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
| image | newest-cni-488874 image list | newest-cni-488874 | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
| | --format=json | | | | | |
| pause | -p newest-cni-488874 | newest-cni-488874 | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-488874 | newest-cni-488874 | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-488874 | newest-cni-488874 | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
| delete | -p newest-cni-488874 | newest-cni-488874 | jenkins | v1.35.0 | 20 Jan 25 14:05 UTC | 20 Jan 25 14:05 UTC |
| image | default-k8s-diff-port-901416 | default-k8s-diff-port-901416 | jenkins | v1.35.0 | 20 Jan 25 14:06 UTC | 20 Jan 25 14:06 UTC |
| | image list --format=json | | | | | |
| pause | -p | default-k8s-diff-port-901416 | jenkins | v1.35.0 | 20 Jan 25 14:06 UTC | 20 Jan 25 14:06 UTC |
| | default-k8s-diff-port-901416 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-diff-port-901416 | jenkins | v1.35.0 | 20 Jan 25 14:06 UTC | 20 Jan 25 14:06 UTC |
| | default-k8s-diff-port-901416 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-diff-port-901416 | jenkins | v1.35.0 | 20 Jan 25 14:06 UTC | 20 Jan 25 14:06 UTC |
| | default-k8s-diff-port-901416 | | | | | |
| delete | -p | default-k8s-diff-port-901416 | jenkins | v1.35.0 | 20 Jan 25 14:06 UTC | 20 Jan 25 14:06 UTC |
| | default-k8s-diff-port-901416 | | | | | |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/20 14:04:47
Running on machine: ubuntu-20-agent
Binary: Built with gc go1.23.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0120 14:04:47.050101 1063160 out.go:345] Setting OutFile to fd 1 ...
I0120 14:04:47.050227 1063160 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 14:04:47.050232 1063160 out.go:358] Setting ErrFile to fd 2...
I0120 14:04:47.050237 1063160 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 14:04:47.050499 1063160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-998973/.minikube/bin
I0120 14:04:47.051203 1063160 out.go:352] Setting JSON to false
I0120 14:04:47.052449 1063160 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":13629,"bootTime":1737368258,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0120 14:04:47.052579 1063160 start.go:139] virtualization: kvm guest
I0120 14:04:47.055235 1063160 out.go:177] * [newest-cni-488874] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0120 14:04:47.056951 1063160 out.go:177] - MINIKUBE_LOCATION=20242
I0120 14:04:47.056934 1063160 notify.go:220] Checking for updates...
I0120 14:04:47.058630 1063160 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 14:04:47.060396 1063160 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20242-998973/kubeconfig
I0120 14:04:47.061968 1063160 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-998973/.minikube
I0120 14:04:47.063408 1063160 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0120 14:04:47.064917 1063160 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 14:04:47.066668 1063160 config.go:182] Loaded profile config "newest-cni-488874": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 14:04:47.067111 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:04:47.067182 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:04:47.083702 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37981
I0120 14:04:47.084272 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:04:47.084954 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:04:47.084998 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:04:47.085439 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:04:47.085687 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:04:47.086006 1063160 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 14:04:47.086434 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:04:47.086492 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:04:47.103220 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
I0120 14:04:47.103721 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:04:47.104507 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:04:47.104547 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:04:47.104876 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:04:47.105165 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:04:47.143032 1063160 out.go:177] * Using the kvm2 driver based on existing profile
I0120 14:04:47.144670 1063160 start.go:297] selected driver: kvm2
I0120 14:04:47.144697 1063160 start.go:901] validating driver "kvm2" against &{Name:newest-cni-488874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-488874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.166 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:04:47.144885 1063160 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 14:04:47.145958 1063160 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:04:47.146076 1063160 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-998973/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0120 14:04:47.162250 1063160 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0120 14:04:47.162842 1063160 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0120 14:04:47.162911 1063160 cni.go:84] Creating CNI manager for ""
I0120 14:04:47.162986 1063160 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 14:04:47.163055 1063160 start.go:340] cluster config:
{Name:newest-cni-488874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-488874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.166 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:04:47.163221 1063160 iso.go:125] acquiring lock: {Name:mk63965bcac7e5d2166c667dd03e4270f636bd53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 14:04:47.165513 1063160 out.go:177] * Starting "newest-cni-488874" primary control-plane node in "newest-cni-488874" cluster
I0120 14:04:47.167021 1063160 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 14:04:47.167079 1063160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
I0120 14:04:47.167105 1063160 cache.go:56] Caching tarball of preloaded images
I0120 14:04:47.167264 1063160 preload.go:172] Found /home/jenkins/minikube-integration/20242-998973/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0120 14:04:47.167288 1063160 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on containerd
I0120 14:04:47.167435 1063160 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/config.json ...
I0120 14:04:47.167717 1063160 start.go:360] acquireMachinesLock for newest-cni-488874: {Name:mk36ae0f7b2d42a8734a6403f72836860fc4ccfa Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0120 14:04:47.167792 1063160 start.go:364] duration metric: took 47.776µs to acquireMachinesLock for "newest-cni-488874"
I0120 14:04:47.167814 1063160 start.go:96] Skipping create...Using existing machine configuration
I0120 14:04:47.167822 1063160 fix.go:54] fixHost starting:
I0120 14:04:47.168125 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:04:47.168164 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:04:47.183549 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
I0120 14:04:47.184104 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:04:47.184711 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:04:47.184744 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:04:47.185155 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:04:47.185366 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:04:47.185574 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
I0120 14:04:47.187388 1063160 fix.go:112] recreateIfNeeded on newest-cni-488874: state=Stopped err=<nil>
I0120 14:04:47.187412 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
W0120 14:04:47.187603 1063160 fix.go:138] unexpected machine state, will restart: <nil>
I0120 14:04:47.189877 1063160 out.go:177] * Restarting existing kvm2 VM for "newest-cni-488874" ...
I0120 14:04:45.425579 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:47.926756 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:46.868852 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:48.870219 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:46.915776 1060619 pod_ready.go:103] pod "metrics-server-f79f97bbb-4wzdk" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:48.916552 1060619 pod_ready.go:103] pod "metrics-server-f79f97bbb-4wzdk" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:51.415545 1060619 pod_ready.go:103] pod "metrics-server-f79f97bbb-4wzdk" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:47.191455 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Start
I0120 14:04:47.191771 1063160 main.go:141] libmachine: (newest-cni-488874) starting domain...
I0120 14:04:47.191797 1063160 main.go:141] libmachine: (newest-cni-488874) ensuring networks are active...
I0120 14:04:47.192792 1063160 main.go:141] libmachine: (newest-cni-488874) Ensuring network default is active
I0120 14:04:47.193160 1063160 main.go:141] libmachine: (newest-cni-488874) Ensuring network mk-newest-cni-488874 is active
I0120 14:04:47.193642 1063160 main.go:141] libmachine: (newest-cni-488874) getting domain XML...
I0120 14:04:47.194500 1063160 main.go:141] libmachine: (newest-cni-488874) creating domain...
I0120 14:04:48.526775 1063160 main.go:141] libmachine: (newest-cni-488874) waiting for IP...
I0120 14:04:48.527710 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:48.528359 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:48.528470 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:48.528313 1063195 retry.go:31] will retry after 228.063414ms: waiting for domain to come up
I0120 14:04:48.757843 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:48.758439 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:48.758480 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:48.758409 1063195 retry.go:31] will retry after 375.398282ms: waiting for domain to come up
I0120 14:04:49.135078 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:49.135653 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:49.135704 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:49.135605 1063195 retry.go:31] will retry after 439.758196ms: waiting for domain to come up
I0120 14:04:49.577514 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:49.578119 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:49.578170 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:49.578078 1063195 retry.go:31] will retry after 456.356276ms: waiting for domain to come up
I0120 14:04:50.035835 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:50.036421 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:50.036455 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:50.036381 1063195 retry.go:31] will retry after 602.99846ms: waiting for domain to come up
I0120 14:04:50.641379 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:50.642024 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:50.642052 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:50.641984 1063195 retry.go:31] will retry after 929.982744ms: waiting for domain to come up
I0120 14:04:51.573106 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:51.573644 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:51.573676 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:51.573578 1063195 retry.go:31] will retry after 800.371471ms: waiting for domain to come up
I0120 14:04:50.427277 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:52.925532 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:51.369069 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:53.369540 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:51.914831 1060619 pod_ready.go:82] duration metric: took 4m0.007391522s for pod "metrics-server-f79f97bbb-4wzdk" in "kube-system" namespace to be "Ready" ...
E0120 14:04:51.914867 1060619 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0120 14:04:51.914878 1060619 pod_ready.go:39] duration metric: took 4m7.421521073s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 14:04:51.914899 1060619 api_server.go:52] waiting for apiserver process to appear ...
I0120 14:04:51.914936 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:04:51.915002 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:04:51.972482 1060619 cri.go:89] found id: "7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
I0120 14:04:51.972517 1060619 cri.go:89] found id: "02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
I0120 14:04:51.972524 1060619 cri.go:89] found id: ""
I0120 14:04:51.972535 1060619 logs.go:282] 2 containers: [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad]
I0120 14:04:51.972606 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:51.978179 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:51.987282 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:04:51.987420 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:04:52.032979 1060619 cri.go:89] found id: "9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
I0120 14:04:52.033017 1060619 cri.go:89] found id: "55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
I0120 14:04:52.033024 1060619 cri.go:89] found id: ""
I0120 14:04:52.033035 1060619 logs.go:282] 2 containers: [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512]
I0120 14:04:52.033107 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.039652 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.044848 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:04:52.044932 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:04:52.096249 1060619 cri.go:89] found id: "adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
I0120 14:04:52.096283 1060619 cri.go:89] found id: "41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
I0120 14:04:52.096289 1060619 cri.go:89] found id: ""
I0120 14:04:52.096300 1060619 logs.go:282] 2 containers: [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246]
I0120 14:04:52.096369 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.101358 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.106095 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:04:52.106169 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:04:52.154285 1060619 cri.go:89] found id: "fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
I0120 14:04:52.154318 1060619 cri.go:89] found id: "b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
I0120 14:04:52.154323 1060619 cri.go:89] found id: ""
I0120 14:04:52.154331 1060619 logs.go:282] 2 containers: [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a]
I0120 14:04:52.154382 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.159475 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.164277 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:04:52.164353 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:04:52.204626 1060619 cri.go:89] found id: "d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
I0120 14:04:52.204657 1060619 cri.go:89] found id: "690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
I0120 14:04:52.204663 1060619 cri.go:89] found id: ""
I0120 14:04:52.204674 1060619 logs.go:282] 2 containers: [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527]
I0120 14:04:52.204736 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.209519 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.213820 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:04:52.213885 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:04:52.257332 1060619 cri.go:89] found id: "68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
I0120 14:04:52.257364 1060619 cri.go:89] found id: "72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
I0120 14:04:52.257371 1060619 cri.go:89] found id: ""
I0120 14:04:52.257382 1060619 logs.go:282] 2 containers: [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67]
I0120 14:04:52.257446 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.263188 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.269822 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:04:52.269897 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:04:52.312509 1060619 cri.go:89] found id: ""
I0120 14:04:52.312539 1060619 logs.go:282] 0 containers: []
W0120 14:04:52.312548 1060619 logs.go:284] No container was found matching "kindnet"
I0120 14:04:52.312562 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:04:52.312618 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:04:52.360717 1060619 cri.go:89] found id: "19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
I0120 14:04:52.360745 1060619 cri.go:89] found id: ""
I0120 14:04:52.360756 1060619 logs.go:282] 1 containers: [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a]
I0120 14:04:52.360832 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.366217 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:04:52.366308 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:04:52.415084 1060619 cri.go:89] found id: "d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
I0120 14:04:52.415123 1060619 cri.go:89] found id: "1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
I0120 14:04:52.415129 1060619 cri.go:89] found id: ""
I0120 14:04:52.415140 1060619 logs.go:282] 2 containers: [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416]
I0120 14:04:52.415218 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.419894 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:52.424668 1060619 logs.go:123] Gathering logs for kube-controller-manager [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787] ...
I0120 14:04:52.424696 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
I0120 14:04:52.489085 1060619 logs.go:123] Gathering logs for kubernetes-dashboard [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a] ...
I0120 14:04:52.489131 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
I0120 14:04:52.536894 1060619 logs.go:123] Gathering logs for kube-scheduler [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807] ...
I0120 14:04:52.536937 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
I0120 14:04:52.577327 1060619 logs.go:123] Gathering logs for kube-scheduler [b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a] ...
I0120 14:04:52.577371 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
I0120 14:04:52.635187 1060619 logs.go:123] Gathering logs for kube-proxy [690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527] ...
I0120 14:04:52.635246 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
I0120 14:04:52.678528 1060619 logs.go:123] Gathering logs for containerd ...
I0120 14:04:52.678570 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:04:52.739780 1060619 logs.go:123] Gathering logs for container status ...
I0120 14:04:52.739830 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 14:04:52.791166 1060619 logs.go:123] Gathering logs for describe nodes ...
I0120 14:04:52.791233 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 14:04:52.961331 1060619 logs.go:123] Gathering logs for etcd [55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512] ...
I0120 14:04:52.961376 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
I0120 14:04:53.045232 1060619 logs.go:123] Gathering logs for storage-provisioner [1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416] ...
I0120 14:04:53.045281 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
I0120 14:04:53.093889 1060619 logs.go:123] Gathering logs for kube-controller-manager [72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67] ...
I0120 14:04:53.093950 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
I0120 14:04:53.174518 1060619 logs.go:123] Gathering logs for storage-provisioner [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc] ...
I0120 14:04:53.174565 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
I0120 14:04:53.221380 1060619 logs.go:123] Gathering logs for kubelet ...
I0120 14:04:53.221424 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 14:04:53.303548 1060619 logs.go:123] Gathering logs for dmesg ...
I0120 14:04:53.303629 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:04:53.319656 1060619 logs.go:123] Gathering logs for coredns [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316] ...
I0120 14:04:53.319700 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
I0120 14:04:53.363932 1060619 logs.go:123] Gathering logs for coredns [41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246] ...
I0120 14:04:53.363976 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
I0120 14:04:53.425306 1060619 logs.go:123] Gathering logs for kube-proxy [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332] ...
I0120 14:04:53.425353 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
I0120 14:04:53.479186 1060619 logs.go:123] Gathering logs for kube-apiserver [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31] ...
I0120 14:04:53.479230 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
I0120 14:04:53.537133 1060619 logs.go:123] Gathering logs for kube-apiserver [02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad] ...
I0120 14:04:53.537190 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
I0120 14:04:53.587036 1060619 logs.go:123] Gathering logs for etcd [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b] ...
I0120 14:04:53.587082 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
I0120 14:04:56.146948 1060619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:04:56.165984 1060619 api_server.go:72] duration metric: took 4m18.967999913s to wait for apiserver process to appear ...
I0120 14:04:56.166016 1060619 api_server.go:88] waiting for apiserver healthz status ...
I0120 14:04:56.166056 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:04:56.166126 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:04:56.216149 1060619 cri.go:89] found id: "7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
I0120 14:04:56.216180 1060619 cri.go:89] found id: "02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
I0120 14:04:56.216185 1060619 cri.go:89] found id: ""
I0120 14:04:56.216195 1060619 logs.go:282] 2 containers: [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad]
I0120 14:04:56.216261 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.221620 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.227539 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:04:56.227642 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:04:56.271909 1060619 cri.go:89] found id: "9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
I0120 14:04:56.271946 1060619 cri.go:89] found id: "55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
I0120 14:04:56.271952 1060619 cri.go:89] found id: ""
I0120 14:04:56.271964 1060619 logs.go:282] 2 containers: [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512]
I0120 14:04:56.272035 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.278155 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.283955 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:04:56.284047 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:04:56.328236 1060619 cri.go:89] found id: "adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
I0120 14:04:56.328271 1060619 cri.go:89] found id: "41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
I0120 14:04:56.328277 1060619 cri.go:89] found id: ""
I0120 14:04:56.328288 1060619 logs.go:282] 2 containers: [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246]
I0120 14:04:56.328364 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.334015 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.339913 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:04:56.340003 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:04:56.393554 1060619 cri.go:89] found id: "fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
I0120 14:04:56.393592 1060619 cri.go:89] found id: "b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
I0120 14:04:56.393600 1060619 cri.go:89] found id: ""
I0120 14:04:56.393612 1060619 logs.go:282] 2 containers: [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a]
I0120 14:04:56.393685 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.400490 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.407736 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:04:56.407844 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:04:52.375493 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:52.376103 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:52.376133 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:52.376063 1063195 retry.go:31] will retry after 1.091722591s: waiting for domain to come up
I0120 14:04:53.469641 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:53.470320 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:53.470350 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:53.470265 1063195 retry.go:31] will retry after 1.304505368s: waiting for domain to come up
I0120 14:04:54.776482 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:54.777187 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:54.777216 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:54.777099 1063195 retry.go:31] will retry after 1.932003229s: waiting for domain to come up
I0120 14:04:56.711489 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:56.712094 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:56.712128 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:56.712033 1063195 retry.go:31] will retry after 1.877119762s: waiting for domain to come up
I0120 14:04:54.926430 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:57.426323 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:55.868690 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:57.869554 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:04:56.471585 1060619 cri.go:89] found id: "d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
I0120 14:04:56.471616 1060619 cri.go:89] found id: "690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
I0120 14:04:56.471622 1060619 cri.go:89] found id: ""
I0120 14:04:56.471633 1060619 logs.go:282] 2 containers: [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527]
I0120 14:04:56.471707 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.477704 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.483023 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:04:56.483126 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:04:56.544017 1060619 cri.go:89] found id: "68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
I0120 14:04:56.544046 1060619 cri.go:89] found id: "72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
I0120 14:04:56.544053 1060619 cri.go:89] found id: ""
I0120 14:04:56.544063 1060619 logs.go:282] 2 containers: [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67]
I0120 14:04:56.544136 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.548798 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.554021 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:04:56.554093 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:04:56.604780 1060619 cri.go:89] found id: ""
I0120 14:04:56.604824 1060619 logs.go:282] 0 containers: []
W0120 14:04:56.604837 1060619 logs.go:284] No container was found matching "kindnet"
I0120 14:04:56.604845 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:04:56.604922 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:04:56.646325 1060619 cri.go:89] found id: "19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
I0120 14:04:56.646359 1060619 cri.go:89] found id: ""
I0120 14:04:56.646371 1060619 logs.go:282] 1 containers: [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a]
I0120 14:04:56.646439 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.651126 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:04:56.651234 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:04:56.694400 1060619 cri.go:89] found id: "d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
I0120 14:04:56.694443 1060619 cri.go:89] found id: "1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
I0120 14:04:56.694449 1060619 cri.go:89] found id: ""
I0120 14:04:56.694459 1060619 logs.go:282] 2 containers: [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416]
I0120 14:04:56.694539 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.701264 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:04:56.707843 1060619 logs.go:123] Gathering logs for kubelet ...
I0120 14:04:56.707878 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 14:04:56.810155 1060619 logs.go:123] Gathering logs for etcd [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b] ...
I0120 14:04:56.810208 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
I0120 14:04:56.878486 1060619 logs.go:123] Gathering logs for etcd [55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512] ...
I0120 14:04:56.878584 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
I0120 14:04:56.984323 1060619 logs.go:123] Gathering logs for kube-scheduler [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807] ...
I0120 14:04:56.984370 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
I0120 14:04:57.030429 1060619 logs.go:123] Gathering logs for kube-proxy [690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527] ...
I0120 14:04:57.030485 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
I0120 14:04:57.075957 1060619 logs.go:123] Gathering logs for kube-controller-manager [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787] ...
I0120 14:04:57.076008 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
I0120 14:04:57.151785 1060619 logs.go:123] Gathering logs for storage-provisioner [1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416] ...
I0120 14:04:57.151851 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
I0120 14:04:57.200132 1060619 logs.go:123] Gathering logs for dmesg ...
I0120 14:04:57.200178 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:04:57.221442 1060619 logs.go:123] Gathering logs for describe nodes ...
I0120 14:04:57.221495 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 14:04:57.353366 1060619 logs.go:123] Gathering logs for kube-apiserver [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31] ...
I0120 14:04:57.353421 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
I0120 14:04:57.427690 1060619 logs.go:123] Gathering logs for kube-apiserver [02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad] ...
I0120 14:04:57.427726 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
I0120 14:04:57.502048 1060619 logs.go:123] Gathering logs for kube-controller-manager [72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67] ...
I0120 14:04:57.502097 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
I0120 14:04:57.566324 1060619 logs.go:123] Gathering logs for kubernetes-dashboard [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a] ...
I0120 14:04:57.566369 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
I0120 14:04:57.614013 1060619 logs.go:123] Gathering logs for coredns [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316] ...
I0120 14:04:57.614063 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
I0120 14:04:57.671629 1060619 logs.go:123] Gathering logs for coredns [41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246] ...
I0120 14:04:57.671670 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
I0120 14:04:57.733137 1060619 logs.go:123] Gathering logs for kube-scheduler [b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a] ...
I0120 14:04:57.733192 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
I0120 14:04:57.795230 1060619 logs.go:123] Gathering logs for kube-proxy [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332] ...
I0120 14:04:57.795287 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
I0120 14:04:57.850704 1060619 logs.go:123] Gathering logs for storage-provisioner [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc] ...
I0120 14:04:57.850745 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
I0120 14:04:57.913118 1060619 logs.go:123] Gathering logs for containerd ...
I0120 14:04:57.913164 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:04:57.987033 1060619 logs.go:123] Gathering logs for container status ...
I0120 14:04:57.987081 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 14:05:00.546303 1060619 api_server.go:253] Checking apiserver healthz at https://192.168.61.149:8443/healthz ...
I0120 14:05:00.555978 1060619 api_server.go:279] https://192.168.61.149:8443/healthz returned 200:
ok
I0120 14:05:00.557505 1060619 api_server.go:141] control plane version: v1.32.0
I0120 14:05:00.557538 1060619 api_server.go:131] duration metric: took 4.391514556s to wait for apiserver health ...
I0120 14:05:00.557550 1060619 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 14:05:00.557582 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:05:00.557652 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:05:00.619715 1060619 cri.go:89] found id: "7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
I0120 14:05:00.619751 1060619 cri.go:89] found id: "02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
I0120 14:05:00.619758 1060619 cri.go:89] found id: ""
I0120 14:05:00.619771 1060619 logs.go:282] 2 containers: [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad]
I0120 14:05:00.619848 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.624825 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.629551 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:05:00.629633 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:05:00.674890 1060619 cri.go:89] found id: "9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
I0120 14:05:00.674937 1060619 cri.go:89] found id: "55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
I0120 14:05:00.674944 1060619 cri.go:89] found id: ""
I0120 14:05:00.674956 1060619 logs.go:282] 2 containers: [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512]
I0120 14:05:00.675029 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.680286 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.685334 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:05:00.685431 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:05:00.729647 1060619 cri.go:89] found id: "adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
I0120 14:05:00.729678 1060619 cri.go:89] found id: "41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
I0120 14:05:00.729684 1060619 cri.go:89] found id: ""
I0120 14:05:00.729694 1060619 logs.go:282] 2 containers: [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246]
I0120 14:05:00.729766 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.734865 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.740340 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:05:00.740429 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:05:00.799061 1060619 cri.go:89] found id: "fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
I0120 14:05:00.799094 1060619 cri.go:89] found id: "b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
I0120 14:05:00.799101 1060619 cri.go:89] found id: ""
I0120 14:05:00.799111 1060619 logs.go:282] 2 containers: [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a]
I0120 14:05:00.799192 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.803902 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.808273 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:05:00.808346 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:05:00.852747 1060619 cri.go:89] found id: "d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
I0120 14:05:00.852784 1060619 cri.go:89] found id: "690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
I0120 14:05:00.852790 1060619 cri.go:89] found id: ""
I0120 14:05:00.852803 1060619 logs.go:282] 2 containers: [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527]
I0120 14:05:00.852872 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.858346 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.863202 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:05:00.863279 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:05:00.907450 1060619 cri.go:89] found id: "68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
I0120 14:05:00.907474 1060619 cri.go:89] found id: "72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
I0120 14:05:00.907478 1060619 cri.go:89] found id: ""
I0120 14:05:00.907486 1060619 logs.go:282] 2 containers: [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67]
I0120 14:05:00.907542 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.912507 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:00.917120 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:05:00.917216 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:05:00.959792 1060619 cri.go:89] found id: ""
I0120 14:05:00.959828 1060619 logs.go:282] 0 containers: []
W0120 14:05:00.959840 1060619 logs.go:284] No container was found matching "kindnet"
I0120 14:05:00.959848 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:05:00.959923 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:05:00.999755 1060619 cri.go:89] found id: "19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
I0120 14:05:00.999785 1060619 cri.go:89] found id: ""
I0120 14:05:00.999794 1060619 logs.go:282] 1 containers: [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a]
I0120 14:05:00.999845 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:01.004371 1060619 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:05:01.004466 1060619 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:05:01.044946 1060619 cri.go:89] found id: "d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
I0120 14:05:01.044990 1060619 cri.go:89] found id: "1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
I0120 14:05:01.044997 1060619 cri.go:89] found id: ""
I0120 14:05:01.045007 1060619 logs.go:282] 2 containers: [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416]
I0120 14:05:01.045068 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:01.050246 1060619 ssh_runner.go:195] Run: which crictl
I0120 14:05:01.055164 1060619 logs.go:123] Gathering logs for etcd [9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b] ...
I0120 14:05:01.055200 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f7602bf7f00336f9899c8f95fb6cb0da4bb3f24c8461020ea132f5e30bfc77b"
I0120 14:05:01.108108 1060619 logs.go:123] Gathering logs for coredns [adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316] ...
I0120 14:05:01.108153 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adc3dcfdc282af9201e7826e9fcd6c80a16c529ac58d5176bb90e1cd5b694316"
I0120 14:05:01.155209 1060619 logs.go:123] Gathering logs for kube-controller-manager [72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67] ...
I0120 14:05:01.155242 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9caf6884b449852b51ae16e54bdc6b548047e61e8508d14e9b206f4813e67"
I0120 14:05:01.208141 1060619 logs.go:123] Gathering logs for container status ...
I0120 14:05:01.208187 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 14:05:01.257057 1060619 logs.go:123] Gathering logs for dmesg ...
I0120 14:05:01.257095 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:05:01.271460 1060619 logs.go:123] Gathering logs for kubernetes-dashboard [19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a] ...
I0120 14:05:01.271495 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 19a20bd63c977a83ca20049fb784ae6943714d985533dab65c1289853eee2e7a"
I0120 14:05:01.315984 1060619 logs.go:123] Gathering logs for containerd ...
I0120 14:05:01.316031 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:05:01.375729 1060619 logs.go:123] Gathering logs for kube-controller-manager [68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787] ...
I0120 14:05:01.375778 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 68ed03cac6c2094aa12bce8699936927c3fdafc446f7c9ad0d7a3a2e8e12a787"
I0120 14:04:58.591226 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:04:58.591819 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:04:58.591904 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:04:58.591790 1063195 retry.go:31] will retry after 3.366177049s: waiting for domain to come up
I0120 14:05:01.962611 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:01.963313 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | unable to find current IP address of domain newest-cni-488874 in network mk-newest-cni-488874
I0120 14:05:01.963381 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | I0120 14:05:01.963271 1063195 retry.go:31] will retry after 4.39777174s: waiting for domain to come up
I0120 14:04:59.926968 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:02.425700 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:01.442435 1060619 logs.go:123] Gathering logs for storage-provisioner [d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc] ...
I0120 14:05:01.442489 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d304b124b2bae2839b5f866cf991bea6331d185397d6f481f8bb8e19e630dafc"
I0120 14:05:01.498316 1060619 logs.go:123] Gathering logs for kubelet ...
I0120 14:05:01.498358 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 14:05:01.576794 1060619 logs.go:123] Gathering logs for kube-apiserver [7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31] ...
I0120 14:05:01.576853 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d57d176b780f0dee8918c4bb2ffdac2adf18c59ca9ce5f212fc40e769598a31"
I0120 14:05:01.628660 1060619 logs.go:123] Gathering logs for kube-apiserver [02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad] ...
I0120 14:05:01.628701 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02072e0f6c71b7c3c1a28f20c9666d56366fed158817c04f41ef341dd5bb8bad"
I0120 14:05:01.676023 1060619 logs.go:123] Gathering logs for etcd [55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512] ...
I0120 14:05:01.676066 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a01a00e24810acb396db166bcf4de340b5de70dd8efde4d3df9ea7a41e7512"
I0120 14:05:01.760456 1060619 logs.go:123] Gathering logs for kube-proxy [d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332] ...
I0120 14:05:01.760505 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d17910e47dd842ffca9a44dfce87a74c7177842130b2a885bf690c951756b332"
I0120 14:05:01.808639 1060619 logs.go:123] Gathering logs for storage-provisioner [1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416] ...
I0120 14:05:01.808679 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1180fd6f5db5f0f6f154c5f767d568e7d23eb7839a012915d033ff5796fde416"
I0120 14:05:01.851560 1060619 logs.go:123] Gathering logs for describe nodes ...
I0120 14:05:01.851608 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 14:05:01.974027 1060619 logs.go:123] Gathering logs for coredns [41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246] ...
I0120 14:05:01.974068 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 41e92150a94ed565f09abffe11a56cfec14b623b375021c8a91497067beb8246"
I0120 14:05:02.028243 1060619 logs.go:123] Gathering logs for kube-scheduler [fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807] ...
I0120 14:05:02.028282 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe1fbba5fa5fb0856252eca425577134e14324efe099e6728381e4992272e807"
I0120 14:05:02.072145 1060619 logs.go:123] Gathering logs for kube-scheduler [b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a] ...
I0120 14:05:02.072184 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5717a9b2f696c83b68c5031409351d9836f275c35603535d2c628cb0907cc3a"
I0120 14:05:02.132398 1060619 logs.go:123] Gathering logs for kube-proxy [690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527] ...
I0120 14:05:02.132439 1060619 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 690ac9360bae5844259f7057d0e04e44eef56919c5be52122b44ae555dfa5527"
I0120 14:05:04.688443 1060619 system_pods.go:59] 8 kube-system pods found
I0120 14:05:04.688485 1060619 system_pods.go:61] "coredns-668d6bf9bc-n6s85" [69154ea8-b8a0-4320-827b-616277a36df3] Running
I0120 14:05:04.688490 1060619 system_pods.go:61] "etcd-no-preload-097312" [1f5692ac-d9be-42f7-bfbb-2bbf06b63811] Running
I0120 14:05:04.688493 1060619 system_pods.go:61] "kube-apiserver-no-preload-097312" [6794a44a-ccbb-4242-819e-27b02589ca1a] Running
I0120 14:05:04.688497 1060619 system_pods.go:61] "kube-controller-manager-no-preload-097312" [272771b0-de01-49a8-902c-fffa5e478bdf] Running
I0120 14:05:04.688500 1060619 system_pods.go:61] "kube-proxy-xnklt" [5a439af8-d69e-40b5-aa33-b04adf773d1f] Running
I0120 14:05:04.688503 1060619 system_pods.go:61] "kube-scheduler-no-preload-097312" [10717848-0d1d-4f1d-9c31-07956ac756db] Running
I0120 14:05:04.688510 1060619 system_pods.go:61] "metrics-server-f79f97bbb-4wzdk" [f224006c-6882-455d-b3e6-45c1a34c5748] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 14:05:04.688514 1060619 system_pods.go:61] "storage-provisioner" [a862a893-ccaf-45fb-a349-98804054f044] Running
I0120 14:05:04.688522 1060619 system_pods.go:74] duration metric: took 4.130964895s to wait for pod list to return data ...
I0120 14:05:04.688529 1060619 default_sa.go:34] waiting for default service account to be created ...
I0120 14:05:04.691965 1060619 default_sa.go:45] found service account: "default"
I0120 14:05:04.691998 1060619 default_sa.go:55] duration metric: took 3.462513ms for default service account to be created ...
I0120 14:05:04.692009 1060619 system_pods.go:137] waiting for k8s-apps to be running ...
I0120 14:05:04.697430 1060619 system_pods.go:87] 8 kube-system pods found
I0120 14:05:04.700108 1060619 system_pods.go:105] "coredns-668d6bf9bc-n6s85" [69154ea8-b8a0-4320-827b-616277a36df3] Running
I0120 14:05:04.700127 1060619 system_pods.go:105] "etcd-no-preload-097312" [1f5692ac-d9be-42f7-bfbb-2bbf06b63811] Running
I0120 14:05:04.700134 1060619 system_pods.go:105] "kube-apiserver-no-preload-097312" [6794a44a-ccbb-4242-819e-27b02589ca1a] Running
I0120 14:05:04.700139 1060619 system_pods.go:105] "kube-controller-manager-no-preload-097312" [272771b0-de01-49a8-902c-fffa5e478bdf] Running
I0120 14:05:04.700143 1060619 system_pods.go:105] "kube-proxy-xnklt" [5a439af8-d69e-40b5-aa33-b04adf773d1f] Running
I0120 14:05:04.700148 1060619 system_pods.go:105] "kube-scheduler-no-preload-097312" [10717848-0d1d-4f1d-9c31-07956ac756db] Running
I0120 14:05:04.700155 1060619 system_pods.go:105] "metrics-server-f79f97bbb-4wzdk" [f224006c-6882-455d-b3e6-45c1a34c5748] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 14:05:04.700159 1060619 system_pods.go:105] "storage-provisioner" [a862a893-ccaf-45fb-a349-98804054f044] Running
I0120 14:05:04.700169 1060619 system_pods.go:147] duration metric: took 8.153945ms to wait for k8s-apps to be running ...
I0120 14:05:04.700179 1060619 system_svc.go:44] waiting for kubelet service to be running ....
I0120 14:05:04.700240 1060619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0120 14:05:04.716658 1060619 system_svc.go:56] duration metric: took 16.464364ms WaitForService to wait for kubelet
I0120 14:05:04.716694 1060619 kubeadm.go:582] duration metric: took 4m27.518718562s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 14:05:04.716715 1060619 node_conditions.go:102] verifying NodePressure condition ...
I0120 14:05:04.720144 1060619 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0120 14:05:04.720183 1060619 node_conditions.go:123] node cpu capacity is 2
I0120 14:05:04.720205 1060619 node_conditions.go:105] duration metric: took 3.486041ms to run NodePressure ...
I0120 14:05:04.720220 1060619 start.go:241] waiting for startup goroutines ...
I0120 14:05:04.720227 1060619 start.go:246] waiting for cluster config update ...
I0120 14:05:04.720238 1060619 start.go:255] writing updated cluster config ...
I0120 14:05:04.720581 1060619 ssh_runner.go:195] Run: rm -f paused
I0120 14:05:04.773678 1060619 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
I0120 14:05:04.775933 1060619 out.go:177] * Done! kubectl is now configured to use "no-preload-097312" cluster and "default" namespace by default
I0120 14:05:00.367543 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:02.867886 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:04.870609 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:06.365969 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.366715 1063160 main.go:141] libmachine: (newest-cni-488874) found domain IP: 192.168.50.166
I0120 14:05:06.366743 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has current primary IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.366751 1063160 main.go:141] libmachine: (newest-cni-488874) reserving static IP address...
I0120 14:05:06.367368 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "newest-cni-488874", mac: "52:54:00:01:cb:b8", ip: "192.168.50.166"} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:06.367396 1063160 main.go:141] libmachine: (newest-cni-488874) reserved static IP address 192.168.50.166 for domain newest-cni-488874
I0120 14:05:06.367422 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | skip adding static IP to network mk-newest-cni-488874 - found existing host DHCP lease matching {name: "newest-cni-488874", mac: "52:54:00:01:cb:b8", ip: "192.168.50.166"}
I0120 14:05:06.367441 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Getting to WaitForSSH function...
I0120 14:05:06.367475 1063160 main.go:141] libmachine: (newest-cni-488874) waiting for SSH...
I0120 14:05:06.369915 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.370396 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:06.370436 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.370661 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Using SSH client type: external
I0120 14:05:06.370702 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa (-rw-------)
I0120 14:05:06.370734 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa -p 22] /usr/bin/ssh <nil>}
I0120 14:05:06.370751 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | About to run SSH command:
I0120 14:05:06.370765 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | exit 0
I0120 14:05:06.497942 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | SSH cmd err, output: <nil>:
I0120 14:05:06.498433 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetConfigRaw
I0120 14:05:06.499140 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetIP
I0120 14:05:06.502365 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.502778 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:06.502860 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.503147 1063160 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/config.json ...
I0120 14:05:06.503544 1063160 machine.go:93] provisionDockerMachine start ...
I0120 14:05:06.503577 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:05:06.503843 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:06.506590 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.507108 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:06.507143 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.507356 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:06.507593 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:06.507757 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:06.507886 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:06.508072 1063160 main.go:141] libmachine: Using SSH client type: native
I0120 14:05:06.508364 1063160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.166 22 <nil> <nil>}
I0120 14:05:06.508383 1063160 main.go:141] libmachine: About to run SSH command:
hostname
I0120 14:05:06.617955 1063160 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0120 14:05:06.617985 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetMachineName
I0120 14:05:06.618222 1063160 buildroot.go:166] provisioning hostname "newest-cni-488874"
I0120 14:05:06.618235 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetMachineName
I0120 14:05:06.618406 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:06.621376 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.621821 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:06.621848 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.622132 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:06.622353 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:06.622542 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:06.622802 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:06.623048 1063160 main.go:141] libmachine: Using SSH client type: native
I0120 14:05:06.623283 1063160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.166 22 <nil> <nil>}
I0120 14:05:06.623305 1063160 main.go:141] libmachine: About to run SSH command:
sudo hostname newest-cni-488874 && echo "newest-cni-488874" | sudo tee /etc/hostname
I0120 14:05:06.743983 1063160 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-488874
I0120 14:05:06.744012 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:06.747395 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.747789 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:06.747822 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.748024 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:06.748243 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:06.748471 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:06.748646 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:06.748824 1063160 main.go:141] libmachine: Using SSH client type: native
I0120 14:05:06.749137 1063160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.166 22 <nil> <nil>}
I0120 14:05:06.749160 1063160 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\snewest-cni-488874' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-488874/g' /etc/hosts;
else
echo '127.0.1.1 newest-cni-488874' | sudo tee -a /etc/hosts;
fi
fi
I0120 14:05:06.864413 1063160 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 14:05:06.864448 1063160 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-998973/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-998973/.minikube}
I0120 14:05:06.864468 1063160 buildroot.go:174] setting up certificates
I0120 14:05:06.864479 1063160 provision.go:84] configureAuth start
I0120 14:05:06.864489 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetMachineName
I0120 14:05:06.864804 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetIP
I0120 14:05:06.867729 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.868082 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:06.868115 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.868340 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:06.870939 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.871411 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:06.871441 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.871576 1063160 provision.go:143] copyHostCerts
I0120 14:05:06.871647 1063160 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem, removing ...
I0120 14:05:06.871668 1063160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem
I0120 14:05:06.871737 1063160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/ca.pem (1082 bytes)
I0120 14:05:06.871841 1063160 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem, removing ...
I0120 14:05:06.871850 1063160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem
I0120 14:05:06.871886 1063160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/cert.pem (1123 bytes)
I0120 14:05:06.871962 1063160 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem, removing ...
I0120 14:05:06.871969 1063160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem
I0120 14:05:06.871996 1063160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-998973/.minikube/key.pem (1675 bytes)
I0120 14:05:06.872059 1063160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem org=jenkins.newest-cni-488874 san=[127.0.0.1 192.168.50.166 localhost minikube newest-cni-488874]
I0120 14:05:06.934937 1063160 provision.go:177] copyRemoteCerts
I0120 14:05:06.934999 1063160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 14:05:06.935043 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:06.938241 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.938542 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:06.938570 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:06.938812 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:06.938991 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:06.939188 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:06.939330 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
I0120 14:05:07.032002 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0120 14:05:04.925140 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:06.925415 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:08.925905 1060798 pod_ready.go:103] pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:07.061467 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0120 14:05:07.089322 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0120 14:05:07.116452 1063160 provision.go:87] duration metric: took 251.958223ms to configureAuth
I0120 14:05:07.116486 1063160 buildroot.go:189] setting minikube options for container-runtime
I0120 14:05:07.116712 1063160 config.go:182] Loaded profile config "newest-cni-488874": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 14:05:07.116729 1063160 machine.go:96] duration metric: took 613.164362ms to provisionDockerMachine
I0120 14:05:07.116742 1063160 start.go:293] postStartSetup for "newest-cni-488874" (driver="kvm2")
I0120 14:05:07.116756 1063160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 14:05:07.116795 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:05:07.117251 1063160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 14:05:07.117292 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:07.120232 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:07.120713 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:07.120748 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:07.120914 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:07.121122 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:07.121323 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:07.121518 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
I0120 14:05:07.203944 1063160 ssh_runner.go:195] Run: cat /etc/os-release
I0120 14:05:07.208749 1063160 info.go:137] Remote host: Buildroot 2023.02.9
I0120 14:05:07.208779 1063160 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-998973/.minikube/addons for local assets ...
I0120 14:05:07.208840 1063160 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-998973/.minikube/files for local assets ...
I0120 14:05:07.208922 1063160 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem -> 10062632.pem in /etc/ssl/certs
I0120 14:05:07.209070 1063160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0120 14:05:07.219151 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem --> /etc/ssl/certs/10062632.pem (1708 bytes)
I0120 14:05:07.247592 1063160 start.go:296] duration metric: took 130.829742ms for postStartSetup
I0120 14:05:07.247660 1063160 fix.go:56] duration metric: took 20.079818838s for fixHost
I0120 14:05:07.247693 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:07.250441 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:07.250887 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:07.250933 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:07.251219 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:07.251458 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:07.251656 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:07.251876 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:07.252078 1063160 main.go:141] libmachine: Using SSH client type: native
I0120 14:05:07.252282 1063160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.166 22 <nil> <nil>}
I0120 14:05:07.252292 1063160 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0120 14:05:07.358734 1063160 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381907.331379316
I0120 14:05:07.358760 1063160 fix.go:216] guest clock: 1737381907.331379316
I0120 14:05:07.358768 1063160 fix.go:229] Guest: 2025-01-20 14:05:07.331379316 +0000 UTC Remote: 2025-01-20 14:05:07.247665057 +0000 UTC m=+20.241792947 (delta=83.714259ms)
I0120 14:05:07.358792 1063160 fix.go:200] guest clock delta is within tolerance: 83.714259ms
I0120 14:05:07.358800 1063160 start.go:83] releasing machines lock for "newest-cni-488874", held for 20.190993038s
I0120 14:05:07.358825 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:05:07.359172 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetIP
I0120 14:05:07.361973 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:07.362383 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:07.362417 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:07.362637 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:05:07.363168 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:05:07.363391 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:05:07.363523 1063160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0120 14:05:07.363572 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:07.363632 1063160 ssh_runner.go:195] Run: cat /version.json
I0120 14:05:07.363664 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:07.367042 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:07.367317 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:07.367442 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:07.367578 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:07.367611 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:07.367813 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:07.367922 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:07.367948 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:07.367966 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:07.368128 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:07.368161 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
I0120 14:05:07.368279 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:07.368454 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:07.368654 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
I0120 14:05:07.478493 1063160 ssh_runner.go:195] Run: systemctl --version
I0120 14:05:07.485765 1063160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0120 14:05:07.494763 1063160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0120 14:05:07.494869 1063160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 14:05:07.517499 1063160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0120 14:05:07.517538 1063160 start.go:495] detecting cgroup driver to use...
I0120 14:05:07.517617 1063160 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0120 14:05:07.549661 1063160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0120 14:05:07.566559 1063160 docker.go:217] disabling cri-docker service (if available) ...
I0120 14:05:07.566632 1063160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0120 14:05:07.582210 1063160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0120 14:05:07.597548 1063160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0120 14:05:07.716948 1063160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0120 14:05:07.905168 1063160 docker.go:233] disabling docker service ...
I0120 14:05:07.905273 1063160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0120 14:05:07.921341 1063160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0120 14:05:07.939537 1063160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0120 14:05:08.082338 1063160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0120 14:05:08.215419 1063160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0120 14:05:08.231001 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0120 14:05:08.252949 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0120 14:05:08.264709 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0120 14:05:08.276797 1063160 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0120 14:05:08.276871 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0120 14:05:08.290184 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 14:05:08.302267 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0120 14:05:08.314508 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0120 14:05:08.326383 1063160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 14:05:08.340055 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0120 14:05:08.351978 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0120 14:05:08.365499 1063160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0120 14:05:08.378256 1063160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 14:05:08.388926 1063160 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0120 14:05:08.389066 1063160 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0120 14:05:08.404028 1063160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 14:05:08.414646 1063160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:05:08.552547 1063160 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0120 14:05:08.586170 1063160 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0120 14:05:08.586254 1063160 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 14:05:08.591476 1063160 retry.go:31] will retry after 1.288149502s: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0120 14:05:09.881095 1063160 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0120 14:05:09.887272 1063160 start.go:563] Will wait 60s for crictl version
I0120 14:05:09.887354 1063160 ssh_runner.go:195] Run: which crictl
I0120 14:05:09.892059 1063160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0120 14:05:09.937510 1063160 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0120 14:05:09.937590 1063160 ssh_runner.go:195] Run: containerd --version
I0120 14:05:09.970847 1063160 ssh_runner.go:195] Run: containerd --version
I0120 14:05:10.000771 1063160 out.go:177] * Preparing Kubernetes v1.32.0 on containerd 1.7.23 ...
I0120 14:05:10.002363 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetIP
I0120 14:05:10.005275 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:10.005716 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:10.005747 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:10.006008 1063160 ssh_runner.go:195] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0120 14:05:10.011138 1063160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
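[editor's note] The bash one-liner above rewrites /etc/hosts: it drops any stale host.minikube.internal line and appends the current gateway IP. A Go sketch of the same rewrite, assuming root on the node:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.50.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop any existing host.minikube.internal entry (the grep -v above)
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}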
I0120 14:05:10.027519 1063160 out.go:177] - kubeadm.pod-network-cidr=10.42.0.0/16
I0120 14:05:07.369190 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:09.867683 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:10.029161 1063160 kubeadm.go:883] updating cluster {Name:newest-cni-488874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-488874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.166 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0120 14:05:10.029378 1063160 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 14:05:10.029484 1063160 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 14:05:10.069810 1063160 containerd.go:627] all images are preloaded for containerd runtime.
I0120 14:05:10.069842 1063160 containerd.go:534] Images already preloaded, skipping extraction
I0120 14:05:10.069913 1063160 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 14:05:10.108630 1063160 containerd.go:627] all images are preloaded for containerd runtime.
I0120 14:05:10.108657 1063160 cache_images.go:84] Images are preloaded, skipping loading
I0120 14:05:10.108667 1063160 kubeadm.go:934] updating node { 192.168.50.166 8443 v1.32.0 containerd true true} ...
I0120 14:05:10.108787 1063160 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-488874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.166
[Install]
config:
{KubernetesVersion:v1.32.0 ClusterName:newest-cni-488874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0120 14:05:10.108847 1063160 ssh_runner.go:195] Run: sudo crictl info
I0120 14:05:10.145581 1063160 cni.go:84] Creating CNI manager for ""
I0120 14:05:10.145612 1063160 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 14:05:10.145629 1063160 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
I0120 14:05:10.145661 1063160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.166 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-488874 NodeName:newest-cni-488874 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0120 14:05:10.145821 1063160 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.166
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "newest-cni-488874"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.50.166"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.50.166"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.42.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.42.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0120 14:05:10.145921 1063160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
I0120 14:05:10.158654 1063160 binaries.go:44] Found k8s binaries, skipping transfer
I0120 14:05:10.158759 1063160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 14:05:10.169232 1063160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
I0120 14:05:10.188001 1063160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0120 14:05:10.208552 1063160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2311 bytes)
I0120 14:05:10.228712 1063160 ssh_runner.go:195] Run: grep 192.168.50.166 control-plane.minikube.internal$ /etc/hosts
I0120 14:05:10.233325 1063160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.166 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 14:05:10.247712 1063160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:05:10.372513 1063160 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 14:05:10.395357 1063160 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874 for IP: 192.168.50.166
I0120 14:05:10.395381 1063160 certs.go:194] generating shared ca certs ...
I0120 14:05:10.395397 1063160 certs.go:226] acquiring lock for ca certs: {Name:mk3b53704e4ec52de26582ed9269b5c3b0eb7914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:05:10.395563 1063160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-998973/.minikube/ca.key
I0120 14:05:10.395622 1063160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.key
I0120 14:05:10.395634 1063160 certs.go:256] generating profile certs ...
I0120 14:05:10.395725 1063160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/client.key
I0120 14:05:10.395793 1063160 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/apiserver.key.2d5efe46
I0120 14:05:10.395840 1063160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/proxy-client.key
I0120 14:05:10.396009 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263.pem (1338 bytes)
W0120 14:05:10.396059 1063160 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263_empty.pem, impossibly tiny 0 bytes
I0120 14:05:10.396065 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca-key.pem (1675 bytes)
I0120 14:05:10.396168 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/ca.pem (1082 bytes)
I0120 14:05:10.396209 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/cert.pem (1123 bytes)
I0120 14:05:10.396263 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/certs/key.pem (1675 bytes)
I0120 14:05:10.396327 1063160 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem (1708 bytes)
I0120 14:05:10.397217 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 14:05:10.438100 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0120 14:05:10.470318 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 14:05:10.503429 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0120 14:05:10.548514 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0120 14:05:10.591209 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0120 14:05:10.620013 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 14:05:10.654243 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/profiles/newest-cni-488874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0120 14:05:10.682296 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/files/etc/ssl/certs/10062632.pem --> /usr/share/ca-certificates/10062632.pem (1708 bytes)
I0120 14:05:10.711242 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 14:05:10.740118 1063160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-998973/.minikube/certs/1006263.pem --> /usr/share/ca-certificates/1006263.pem (1338 bytes)
I0120 14:05:10.769557 1063160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0120 14:05:10.790416 1063160 ssh_runner.go:195] Run: openssl version
I0120 14:05:10.798858 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1006263.pem && ln -fs /usr/share/ca-certificates/1006263.pem /etc/ssl/certs/1006263.pem"
I0120 14:05:10.812120 1063160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1006263.pem
I0120 14:05:10.818021 1063160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:56 /usr/share/ca-certificates/1006263.pem
I0120 14:05:10.818106 1063160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1006263.pem
I0120 14:05:10.825236 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1006263.pem /etc/ssl/certs/51391683.0"
I0120 14:05:10.837376 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10062632.pem && ln -fs /usr/share/ca-certificates/10062632.pem /etc/ssl/certs/10062632.pem"
I0120 14:05:10.851234 1063160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10062632.pem
I0120 14:05:10.856673 1063160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:56 /usr/share/ca-certificates/10062632.pem
I0120 14:05:10.856762 1063160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10062632.pem
I0120 14:05:10.863757 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10062632.pem /etc/ssl/certs/3ec20f2e.0"
I0120 14:05:10.876948 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 14:05:10.889955 1063160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 14:05:10.895521 1063160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:48 /usr/share/ca-certificates/minikubeCA.pem
I0120 14:05:10.895628 1063160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 14:05:10.902527 1063160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
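[editor's note] The openssl/ln commands above install each CA into the system trust store: the PEM is copied under /usr/share/ca-certificates and a /etc/ssl/certs/<subject-hash>.0 symlink is created so OpenSSL can find it. A hedged Go sketch of the same steps, shelling out to the openssl CLI for the hash (paths taken from the log):

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout -in <pem>` prints the subject hash
	// (e.g. b5213941), which names the symlink in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			log.Fatal(err)
		}
	}
	log.Printf("%s -> %s", link, pem)
}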
I0120 14:05:10.915727 1063160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0120 14:05:10.921530 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0120 14:05:10.928703 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0120 14:05:10.936028 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0120 14:05:10.943185 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0120 14:05:10.950536 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0120 14:05:10.957927 1063160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
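[editor's note] The `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent check in Go with crypto/x509 (the file path here is one example from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// fail if the cert expires within the next 24h, like `openssl x509 -checkend 86400`
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h, would regenerate")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}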
I0120 14:05:10.965037 1063160 kubeadm.go:392] StartCluster: {Name:newest-cni-488874 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-488874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.166 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 14:05:10.965163 1063160 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0120 14:05:10.965237 1063160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 14:05:11.016895 1063160 cri.go:89] found id: "95be8b17de43b5aa6b68a36c754c80e8d62bd647fb91f0e7c71481244235e460"
I0120 14:05:11.016941 1063160 cri.go:89] found id: "8b8e79e267ec6846094bf129f754c316fa64f873658a465c224ea763b924d6ca"
I0120 14:05:11.016950 1063160 cri.go:89] found id: "d19a5d3151a0febe9ecb65ccce412de85fc9ece5c227409fb231a5b88bede145"
I0120 14:05:11.016984 1063160 cri.go:89] found id: "6d63815cd6e98958f31f54f7664461dd92453e37f879e555306cd84dd0d6cc74"
I0120 14:05:11.017004 1063160 cri.go:89] found id: "f804fb6624506cef920ec1119d3d52222216ca492ff6742b1c7bd2a306f3f3c9"
I0120 14:05:11.017015 1063160 cri.go:89] found id: "051033fb791c4ccaa103881d621bb050a451c1619be1f52d15846b09499aa578"
I0120 14:05:11.017023 1063160 cri.go:89] found id: "00fb987b20977c8655fb2a021c80ea212ed1570a7f6b5708ffd22f47540bce20"
I0120 14:05:11.017028 1063160 cri.go:89] found id: "6310c66356a530a02ba34dd663abb471277506886eb4b0f05251994e9f8955fb"
I0120 14:05:11.017032 1063160 cri.go:89] found id: ""
I0120 14:05:11.017100 1063160 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0120 14:05:11.034091 1063160 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-20T14:05:11Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0120 14:05:11.034236 1063160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 14:05:11.046808 1063160 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0120 14:05:11.046831 1063160 kubeadm.go:593] restartPrimaryControlPlane start ...
I0120 14:05:11.046883 1063160 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0120 14:05:11.059273 1063160 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0120 14:05:11.060135 1063160 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-488874" does not appear in /home/jenkins/minikube-integration/20242-998973/kubeconfig
I0120 14:05:11.060560 1063160 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-998973/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-488874" cluster setting kubeconfig missing "newest-cni-488874" context setting]
I0120 14:05:11.061427 1063160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-998973/kubeconfig: {Name:mkc416e4f6e76f39025eb204e9812d9900c83215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:05:11.063136 1063160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0120 14:05:11.076655 1063160 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.166
I0120 14:05:11.076709 1063160 kubeadm.go:1160] stopping kube-system containers ...
I0120 14:05:11.076734 1063160 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0120 14:05:11.076801 1063160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 14:05:11.128107 1063160 cri.go:89] found id: "95be8b17de43b5aa6b68a36c754c80e8d62bd647fb91f0e7c71481244235e460"
I0120 14:05:11.128141 1063160 cri.go:89] found id: "8b8e79e267ec6846094bf129f754c316fa64f873658a465c224ea763b924d6ca"
I0120 14:05:11.128147 1063160 cri.go:89] found id: "d19a5d3151a0febe9ecb65ccce412de85fc9ece5c227409fb231a5b88bede145"
I0120 14:05:11.128153 1063160 cri.go:89] found id: "6d63815cd6e98958f31f54f7664461dd92453e37f879e555306cd84dd0d6cc74"
I0120 14:05:11.128157 1063160 cri.go:89] found id: "f804fb6624506cef920ec1119d3d52222216ca492ff6742b1c7bd2a306f3f3c9"
I0120 14:05:11.128163 1063160 cri.go:89] found id: "051033fb791c4ccaa103881d621bb050a451c1619be1f52d15846b09499aa578"
I0120 14:05:11.128167 1063160 cri.go:89] found id: "00fb987b20977c8655fb2a021c80ea212ed1570a7f6b5708ffd22f47540bce20"
I0120 14:05:11.128171 1063160 cri.go:89] found id: "6310c66356a530a02ba34dd663abb471277506886eb4b0f05251994e9f8955fb"
I0120 14:05:11.128175 1063160 cri.go:89] found id: ""
I0120 14:05:11.128183 1063160 cri.go:252] Stopping containers: [95be8b17de43b5aa6b68a36c754c80e8d62bd647fb91f0e7c71481244235e460 8b8e79e267ec6846094bf129f754c316fa64f873658a465c224ea763b924d6ca d19a5d3151a0febe9ecb65ccce412de85fc9ece5c227409fb231a5b88bede145 6d63815cd6e98958f31f54f7664461dd92453e37f879e555306cd84dd0d6cc74 f804fb6624506cef920ec1119d3d52222216ca492ff6742b1c7bd2a306f3f3c9 051033fb791c4ccaa103881d621bb050a451c1619be1f52d15846b09499aa578 00fb987b20977c8655fb2a021c80ea212ed1570a7f6b5708ffd22f47540bce20 6310c66356a530a02ba34dd663abb471277506886eb4b0f05251994e9f8955fb]
I0120 14:05:11.128278 1063160 ssh_runner.go:195] Run: which crictl
I0120 14:05:11.132849 1063160 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 95be8b17de43b5aa6b68a36c754c80e8d62bd647fb91f0e7c71481244235e460 8b8e79e267ec6846094bf129f754c316fa64f873658a465c224ea763b924d6ca d19a5d3151a0febe9ecb65ccce412de85fc9ece5c227409fb231a5b88bede145 6d63815cd6e98958f31f54f7664461dd92453e37f879e555306cd84dd0d6cc74 f804fb6624506cef920ec1119d3d52222216ca492ff6742b1c7bd2a306f3f3c9 051033fb791c4ccaa103881d621bb050a451c1619be1f52d15846b09499aa578 00fb987b20977c8655fb2a021c80ea212ed1570a7f6b5708ffd22f47540bce20 6310c66356a530a02ba34dd663abb471277506886eb4b0f05251994e9f8955fb
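[editor's note] The two commands above list every kube-system container ID via crictl and then stop them with a 10s grace period before the control plane is restarted. A small Go sketch of the same flow, assuming crictl is on PATH and the CRI endpoint is configured:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// list all kube-system container IDs (running or exited)
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	// stop them with a 10s timeout, like the log line above
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		log.Fatal(err)
	}
}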
I0120 14:05:11.182117 1063160 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0120 14:05:11.202340 1063160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 14:05:11.216641 1063160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 14:05:11.216665 1063160 kubeadm.go:157] found existing configuration files:
I0120 14:05:11.216712 1063160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0120 14:05:11.227893 1063160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0120 14:05:11.227979 1063160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0120 14:05:11.239065 1063160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0120 14:05:11.250423 1063160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0120 14:05:11.250491 1063160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0120 14:05:11.261814 1063160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0120 14:05:11.272846 1063160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0120 14:05:11.272913 1063160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0120 14:05:11.284218 1063160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0120 14:05:11.294670 1063160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0120 14:05:11.294762 1063160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
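[editor's note] The loop above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so kubeadm can regenerate it. A Go sketch of that logic (endpoint and paths taken from the log):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// missing file or wrong endpoint: remove so it gets rewritten
			if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
				log.Printf("remove %s: %v", f, err)
			}
		}
	}
}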
I0120 14:05:11.306384 1063160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 14:05:11.318728 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:05:11.491305 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:05:10.918089 1060798 pod_ready.go:82] duration metric: took 4m0.000161453s for pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace to be "Ready" ...
E0120 14:05:10.918131 1060798 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-5mwxz" in "kube-system" namespace to be "Ready" (will not retry!)
I0120 14:05:10.918160 1060798 pod_ready.go:39] duration metric: took 4m13.053682746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 14:05:10.918201 1060798 kubeadm.go:597] duration metric: took 4m21.286948978s to restartPrimaryControlPlane
W0120 14:05:10.918306 1060798 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
I0120 14:05:10.918352 1060798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0120 14:05:12.920615 1060798 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.002231911s)
I0120 14:05:12.920701 1060798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0120 14:05:12.942116 1060798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 14:05:12.954775 1060798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 14:05:12.966775 1060798 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 14:05:12.966807 1060798 kubeadm.go:157] found existing configuration files:
I0120 14:05:12.966883 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0120 14:05:12.977602 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0120 14:05:12.977684 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0120 14:05:12.989019 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0120 14:05:13.000820 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0120 14:05:13.000898 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0120 14:05:13.016644 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0120 14:05:13.031439 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0120 14:05:13.031528 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0120 14:05:13.042457 1060798 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0120 14:05:13.055593 1060798 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0120 14:05:13.055669 1060798 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0120 14:05:13.068674 1060798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0120 14:05:13.130131 1060798 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
I0120 14:05:13.130201 1060798 kubeadm.go:310] [preflight] Running pre-flight checks
I0120 14:05:13.252056 1060798 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0120 14:05:13.252208 1060798 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0120 14:05:13.252350 1060798 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0120 14:05:13.262351 1060798 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0120 14:05:13.264231 1060798 out.go:235] - Generating certificates and keys ...
I0120 14:05:13.264325 1060798 kubeadm.go:310] [certs] Using existing ca certificate authority
I0120 14:05:13.264382 1060798 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0120 14:05:13.264450 1060798 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0120 14:05:13.264503 1060798 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0120 14:05:13.264566 1060798 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0120 14:05:13.264617 1060798 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0120 14:05:13.264693 1060798 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0120 14:05:13.264816 1060798 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0120 14:05:13.264980 1060798 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0120 14:05:13.265097 1060798 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0120 14:05:13.265160 1060798 kubeadm.go:310] [certs] Using the existing "sa" key
I0120 14:05:13.265250 1060798 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0120 14:05:13.376018 1060798 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0120 14:05:13.789822 1060798 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0120 14:05:13.884391 1060798 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0120 14:05:14.207456 1060798 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0120 14:05:14.442708 1060798 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0120 14:05:14.443884 1060798 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0120 14:05:14.447802 1060798 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0120 14:05:11.868693 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:13.869685 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:12.532029 1063160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.040673038s)
I0120 14:05:12.532063 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:05:12.818119 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:05:12.907512 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:05:12.995770 1063160 api_server.go:52] waiting for apiserver process to appear ...
I0120 14:05:12.995910 1063160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:05:13.496795 1063160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:05:13.996059 1063160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:05:14.022569 1063160 api_server.go:72] duration metric: took 1.026799902s to wait for apiserver process to appear ...
I0120 14:05:14.022606 1063160 api_server.go:88] waiting for apiserver healthz status ...
I0120 14:05:14.022633 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
I0120 14:05:14.023253 1063160 api_server.go:269] stopped: https://192.168.50.166:8443/healthz: Get "https://192.168.50.166:8443/healthz": dial tcp 192.168.50.166:8443: connect: connection refused
I0120 14:05:14.523764 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
I0120 14:05:14.449454 1060798 out.go:235] - Booting up control plane ...
I0120 14:05:14.449591 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0120 14:05:14.449723 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0120 14:05:14.450498 1060798 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0120 14:05:14.474336 1060798 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0120 14:05:14.486142 1060798 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0120 14:05:14.486368 1060798 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0120 14:05:14.656630 1060798 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0120 14:05:14.656842 1060798 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0120 14:05:15.658053 1060798 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001688461s
I0120 14:05:15.658185 1060798 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0120 14:05:18.095415 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0120 14:05:18.095452 1063160 api_server.go:103] status: https://192.168.50.166:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0120 14:05:18.095472 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
I0120 14:05:18.117734 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0120 14:05:18.117775 1063160 api_server.go:103] status: https://192.168.50.166:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0120 14:05:18.523010 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
I0120 14:05:18.531327 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0120 14:05:18.531374 1063160 api_server.go:103] status: https://192.168.50.166:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0120 14:05:19.023177 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
I0120 14:05:19.033109 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0120 14:05:19.033139 1063160 api_server.go:103] status: https://192.168.50.166:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0120 14:05:19.522763 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
I0120 14:05:19.546252 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0120 14:05:19.546291 1063160 api_server.go:103] status: https://192.168.50.166:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0120 14:05:20.022811 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
I0120 14:05:20.029777 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 200:
ok
I0120 14:05:20.043595 1063160 api_server.go:141] control plane version: v1.32.0
I0120 14:05:20.043704 1063160 api_server.go:131] duration metric: took 6.021087892s to wait for apiserver health ...
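[editor's note] The healthz polling above keeps hitting /healthz until it returns 200; the intermediate 403s and 500s appear while RBAC bootstrap roles and priority classes are still being created. An illustrative Go poller (URL and deadline mirror the log; this is not minikube's code, and it skips TLS verification the way an anonymous probe of the apiserver must):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.166:8443/healthz")
		if err == nil {
			resp.Body.Close()
			// only a 200 counts as healthy; 403/500 mean "keep waiting"
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for apiserver /healthz")
	os.Exit(1)
}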
I0120 14:05:20.043732 1063160 cni.go:84] Creating CNI manager for ""
I0120 14:05:20.043753 1063160 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 14:05:20.046751 1063160 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0120 14:05:16.368848 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:18.372711 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:20.048206 1063160 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0120 14:05:20.067542 1063160 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
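[editor's note] The scp above drops a bridge CNI conflist into /etc/cni/net.d. The exact contents of minikube's 1-k8s.conflist are not shown in the log, so the JSON below is only an assumed minimal bridge configuration for the 10.42.0.0/16 pod CIDR, written the way the step above would:

package main

import (
	"log"
	"os"
)

// assumed minimal bridge CNI configuration, not minikube's actual template
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}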
I0120 14:05:20.116639 1063160 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 14:05:20.153739 1063160 system_pods.go:59] 9 kube-system pods found
I0120 14:05:20.153793 1063160 system_pods.go:61] "coredns-668d6bf9bc-mpv44" [382315fb-8bd3-48a2-86ec-ae0f5f2f32a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0120 14:05:20.153806 1063160 system_pods.go:61] "coredns-668d6bf9bc-t8nnm" [92f31a93-c6cc-414f-9cd2-92e65e91dafd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0120 14:05:20.153818 1063160 system_pods.go:61] "etcd-newest-cni-488874" [71af6d87-d4e6-4cd3-85ee-88500ddac52f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0120 14:05:20.153835 1063160 system_pods.go:61] "kube-apiserver-newest-cni-488874" [36f48149-363f-4ed7-a528-d3f5dc384634] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0120 14:05:20.153857 1063160 system_pods.go:61] "kube-controller-manager-newest-cni-488874" [56662aa4-63e6-48d2-aaa3-99b69a9cbab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0120 14:05:20.153874 1063160 system_pods.go:61] "kube-proxy-cs8qw" [36baa82d-ba63-4777-894f-8c105690264d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0120 14:05:20.153894 1063160 system_pods.go:61] "kube-scheduler-newest-cni-488874" [1113f67a-580c-4b20-ad28-da730b5d6292] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0120 14:05:20.153914 1063160 system_pods.go:61] "metrics-server-f79f97bbb-kwwbp" [bf28109f-6958-41ec-b019-e0419f4a5093] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 14:05:20.153926 1063160 system_pods.go:61] "storage-provisioner" [e8e2b6ce-d4b0-49d9-9e7d-c771eff38584] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0120 14:05:20.153937 1063160 system_pods.go:74] duration metric: took 37.269372ms to wait for pod list to return data ...
I0120 14:05:20.153955 1063160 node_conditions.go:102] verifying NodePressure condition ...
I0120 14:05:20.165337 1063160 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0120 14:05:20.165386 1063160 node_conditions.go:123] node cpu capacity is 2
I0120 14:05:20.165404 1063160 node_conditions.go:105] duration metric: took 11.443297ms to run NodePressure ...
I0120 14:05:20.165431 1063160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0120 14:05:20.606701 1063160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0120 14:05:20.630689 1063160 ops.go:34] apiserver oom_adj: -16
I0120 14:05:20.630720 1063160 kubeadm.go:597] duration metric: took 9.583881876s to restartPrimaryControlPlane
I0120 14:05:20.630735 1063160 kubeadm.go:394] duration metric: took 9.665718124s to StartCluster
I0120 14:05:20.630770 1063160 settings.go:142] acquiring lock: {Name:mked7f2376b8a06c64dcfd911ab4b0d95ecdbe2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:05:20.630867 1063160 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20242-998973/kubeconfig
I0120 14:05:20.632794 1063160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-998973/kubeconfig: {Name:mkc416e4f6e76f39025eb204e9812d9900c83215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:05:20.633135 1063160 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.166 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 14:05:20.633353 1063160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0120 14:05:20.633478 1063160 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-488874"
I0120 14:05:20.633502 1063160 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-488874"
I0120 14:05:20.633573 1063160 addons.go:69] Setting dashboard=true in profile "newest-cni-488874"
W0120 14:05:20.633590 1063160 addons.go:247] addon storage-provisioner should already be in state true
I0120 14:05:20.633579 1063160 addons.go:69] Setting metrics-server=true in profile "newest-cni-488874"
I0120 14:05:20.633598 1063160 addons.go:238] Setting addon dashboard=true in "newest-cni-488874"
W0120 14:05:20.633606 1063160 addons.go:247] addon dashboard should already be in state true
I0120 14:05:20.633607 1063160 addons.go:238] Setting addon metrics-server=true in "newest-cni-488874"
W0120 14:05:20.633617 1063160 addons.go:247] addon metrics-server should already be in state true
I0120 14:05:20.633629 1063160 host.go:66] Checking if "newest-cni-488874" exists ...
I0120 14:05:20.633635 1063160 host.go:66] Checking if "newest-cni-488874" exists ...
I0120 14:05:20.633545 1063160 addons.go:69] Setting default-storageclass=true in profile "newest-cni-488874"
I0120 14:05:20.633763 1063160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-488874"
I0120 14:05:20.634080 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:20.634122 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:20.634170 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:20.634233 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:20.633533 1063160 config.go:182] Loaded profile config "newest-cni-488874": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 14:05:20.634250 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:20.634302 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:20.633644 1063160 host.go:66] Checking if "newest-cni-488874" exists ...
I0120 14:05:20.634680 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:20.634727 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:20.635689 1063160 out.go:177] * Verifying Kubernetes components...
I0120 14:05:20.637584 1063160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:05:20.656161 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
I0120 14:05:20.656828 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:20.657442 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:05:20.657461 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:20.657809 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:20.657871 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43287
I0120 14:05:20.658038 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
I0120 14:05:20.658145 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
I0120 14:05:20.658269 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:20.658336 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:20.658943 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:20.658989 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:20.659328 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:05:20.659345 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:20.659720 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:20.659880 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:20.660060 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:05:20.660093 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:20.660172 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
I0120 14:05:20.660415 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:20.661044 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:20.661120 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:20.665263 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:05:20.665288 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:20.665954 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:20.666578 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:20.666620 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:20.683034 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33033
I0120 14:05:20.683326 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
I0120 14:05:20.684199 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:20.684289 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:20.685038 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:05:20.685070 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:20.685247 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:05:20.685265 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:20.685542 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:20.685774 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
I0120 14:05:20.685975 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:20.686146 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
I0120 14:05:20.691280 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
I0120 14:05:20.691740 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:20.692270 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:05:20.692293 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:20.692739 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:20.693015 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
I0120 14:05:20.731352 1063160 addons.go:238] Setting addon default-storageclass=true in "newest-cni-488874"
W0120 14:05:20.731384 1063160 addons.go:247] addon default-storageclass should already be in state true
I0120 14:05:20.731420 1063160 host.go:66] Checking if "newest-cni-488874" exists ...
I0120 14:05:20.731819 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:20.731899 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:20.732143 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:05:20.732149 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:05:20.732234 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:05:20.734806 1063160 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 14:05:20.734814 1063160 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0120 14:05:20.735922 1063160 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0120 14:05:20.736428 1063160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:05:20.736456 1063160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 14:05:20.736487 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:20.737437 1063160 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0120 14:05:21.661193 1060798 kubeadm.go:310] [api-check] The API server is healthy after 6.00301289s
I0120 14:05:21.679639 1060798 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0120 14:05:21.697225 1060798 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0120 14:05:21.729640 1060798 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0120 14:05:21.730176 1060798 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-553677 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0120 14:05:21.743570 1060798 kubeadm.go:310] [bootstrap-token] Using token: qgu27t.iap2ani2n2k7zkjw
I0120 14:05:20.738718 1063160 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0120 14:05:20.738745 1063160 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0120 14:05:20.738782 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:20.739196 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0120 14:05:20.739219 1063160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0120 14:05:20.739249 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:20.741831 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:20.742632 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:20.742658 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:20.743356 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:20.743407 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:20.743639 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:20.743790 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:20.743820 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:20.744020 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
I0120 14:05:20.744122 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:20.744163 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:20.744243 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:20.744334 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:20.744350 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:20.744654 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:20.744707 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:20.744862 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:20.744869 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:20.744998 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:20.745067 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
I0120 14:05:20.749103 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
I0120 14:05:20.774519 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
I0120 14:05:20.774980 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:20.775531 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:05:20.775558 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:20.775918 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:20.776511 1063160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:20.776562 1063160 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:20.797766 1063160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
I0120 14:05:20.798308 1063160 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:20.798841 1063160 main.go:141] libmachine: Using API Version 1
I0120 14:05:20.798869 1063160 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:20.799392 1063160 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:20.799597 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetState
I0120 14:05:20.802504 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .DriverName
I0120 14:05:20.802837 1063160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 14:05:20.802856 1063160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
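The 271-byte storageclass.yaml copied here is minikube's default storage class. A sketch of its usual shape (field values are assumptions, not the literal file contents):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: k8s.io/minikube-hostpath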
I0120 14:05:20.802878 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHHostname
I0120 14:05:20.806526 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:20.807070 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:cb:b8", ip: ""} in network mk-newest-cni-488874: {Iface:virbr2 ExpiryTime:2025-01-20 15:04:59 +0000 UTC Type:0 Mac:52:54:00:01:cb:b8 Iaid: IPaddr:192.168.50.166 Prefix:24 Hostname:newest-cni-488874 Clientid:01:52:54:00:01:cb:b8}
I0120 14:05:20.807096 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | domain newest-cni-488874 has defined IP address 192.168.50.166 and MAC address 52:54:00:01:cb:b8 in network mk-newest-cni-488874
I0120 14:05:20.807327 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHPort
I0120 14:05:20.807558 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHKeyPath
I0120 14:05:20.807743 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .GetSSHUsername
I0120 14:05:20.807887 1063160 sshutil.go:53] new ssh client: &{IP:192.168.50.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/newest-cni-488874/id_rsa Username:docker}
I0120 14:05:20.920926 1063160 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 14:05:20.942953 1063160 api_server.go:52] waiting for apiserver process to appear ...
I0120 14:05:20.943092 1063160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:05:20.964946 1063160 api_server.go:72] duration metric: took 331.745037ms to wait for apiserver process to appear ...
I0120 14:05:20.965007 1063160 api_server.go:88] waiting for apiserver healthz status ...
I0120 14:05:20.965033 1063160 api_server.go:253] Checking apiserver healthz at https://192.168.50.166:8443/healthz ...
I0120 14:05:20.974335 1063160 api_server.go:279] https://192.168.50.166:8443/healthz returned 200:
ok
I0120 14:05:20.976530 1063160 api_server.go:141] control plane version: v1.32.0
I0120 14:05:20.976563 1063160 api_server.go:131] duration metric: took 11.547041ms to wait for apiserver health ...
I0120 14:05:20.976576 1063160 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 14:05:20.988080 1063160 system_pods.go:59] 9 kube-system pods found
I0120 14:05:20.988125 1063160 system_pods.go:61] "coredns-668d6bf9bc-mpv44" [382315fb-8bd3-48a2-86ec-ae0f5f2f32a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0120 14:05:20.988136 1063160 system_pods.go:61] "coredns-668d6bf9bc-t8nnm" [92f31a93-c6cc-414f-9cd2-92e65e91dafd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0120 14:05:20.988146 1063160 system_pods.go:61] "etcd-newest-cni-488874" [71af6d87-d4e6-4cd3-85ee-88500ddac52f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0120 14:05:20.988160 1063160 system_pods.go:61] "kube-apiserver-newest-cni-488874" [36f48149-363f-4ed7-a528-d3f5dc384634] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0120 14:05:20.988169 1063160 system_pods.go:61] "kube-controller-manager-newest-cni-488874" [56662aa4-63e6-48d2-aaa3-99b69a9cbab0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0120 14:05:20.988179 1063160 system_pods.go:61] "kube-proxy-cs8qw" [36baa82d-ba63-4777-894f-8c105690264d] Running
I0120 14:05:20.988189 1063160 system_pods.go:61] "kube-scheduler-newest-cni-488874" [1113f67a-580c-4b20-ad28-da730b5d6292] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0120 14:05:20.988208 1063160 system_pods.go:61] "metrics-server-f79f97bbb-kwwbp" [bf28109f-6958-41ec-b019-e0419f4a5093] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 14:05:20.988217 1063160 system_pods.go:61] "storage-provisioner" [e8e2b6ce-d4b0-49d9-9e7d-c771eff38584] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0120 14:05:20.988232 1063160 system_pods.go:74] duration metric: took 11.646417ms to wait for pod list to return data ...
I0120 14:05:20.988247 1063160 default_sa.go:34] waiting for default service account to be created ...
I0120 14:05:20.992460 1063160 default_sa.go:45] found service account: "default"
I0120 14:05:20.992499 1063160 default_sa.go:55] duration metric: took 4.243767ms for default service account to be created ...
I0120 14:05:20.992516 1063160 kubeadm.go:582] duration metric: took 359.326348ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0120 14:05:20.992566 1063160 node_conditions.go:102] verifying NodePressure condition ...
I0120 14:05:21.000430 1063160 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0120 14:05:21.000469 1063160 node_conditions.go:123] node cpu capacity is 2
I0120 14:05:21.000485 1063160 node_conditions.go:105] duration metric: took 7.912327ms to run NodePressure ...
I0120 14:05:21.000502 1063160 start.go:241] waiting for startup goroutines ...
I0120 14:05:21.007595 1063160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 14:05:21.171225 1063160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0120 14:05:21.171261 1063160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0120 14:05:21.237055 1063160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:05:21.319699 1063160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0120 14:05:21.319729 1063160 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0120 14:05:21.403010 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0120 14:05:21.403048 1063160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0120 14:05:21.420219 1063160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:05:21.420263 1063160 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0120 14:05:21.542358 1063160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
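Of the four metrics-server manifests applied above, metrics-apiservice.yaml is the one that registers the aggregated metrics API. A sketch of the conventional APIService object it contains (assumed shape, not the literal 424-byte file):

    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      version: v1beta1
      service:
        name: metrics-server
        namespace: kube-system
      insecureSkipTLSVerify: true
      groupPriorityMinimum: 100
      versionPriority: 100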
I0120 14:05:21.581020 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0120 14:05:21.581058 1063160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0120 14:05:21.654677 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0120 14:05:21.654718 1063160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0120 14:05:21.830895 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0120 14:05:21.830928 1063160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0120 14:05:21.935679 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0120 14:05:21.935718 1063160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0120 14:05:21.745349 1060798 out.go:235] - Configuring RBAC rules ...
I0120 14:05:21.745503 1060798 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0120 14:05:21.754153 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0120 14:05:21.765952 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0120 14:05:21.771799 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0120 14:05:21.779054 1060798 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0120 14:05:21.785557 1060798 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0120 14:05:22.071797 1060798 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0120 14:05:22.539495 1060798 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0120 14:05:23.070019 1060798 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0120 14:05:23.071157 1060798 kubeadm.go:310]
I0120 14:05:23.071304 1060798 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0120 14:05:23.071330 1060798 kubeadm.go:310]
I0120 14:05:23.071427 1060798 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0120 14:05:23.071438 1060798 kubeadm.go:310]
I0120 14:05:23.071470 1060798 kubeadm.go:310] mkdir -p $HOME/.kube
I0120 14:05:23.071548 1060798 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0120 14:05:23.071621 1060798 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0120 14:05:23.071631 1060798 kubeadm.go:310]
I0120 14:05:23.071735 1060798 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0120 14:05:23.071777 1060798 kubeadm.go:310]
I0120 14:05:23.071865 1060798 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0120 14:05:23.071878 1060798 kubeadm.go:310]
I0120 14:05:23.071948 1060798 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0120 14:05:23.072051 1060798 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0120 14:05:23.072144 1060798 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0120 14:05:23.072164 1060798 kubeadm.go:310]
I0120 14:05:23.072309 1060798 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0120 14:05:23.072412 1060798 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0120 14:05:23.072423 1060798 kubeadm.go:310]
I0120 14:05:23.072537 1060798 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qgu27t.iap2ani2n2k7zkjw \
I0120 14:05:23.072690 1060798 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:6117bbf309b9c45faa7e855ae242c4a905187b8a6090715b408f9a384f87e114 \
I0120 14:05:23.072722 1060798 kubeadm.go:310] --control-plane
I0120 14:05:23.072736 1060798 kubeadm.go:310]
I0120 14:05:23.072848 1060798 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0120 14:05:23.072867 1060798 kubeadm.go:310]
I0120 14:05:23.072985 1060798 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qgu27t.iap2ani2n2k7zkjw \
I0120 14:05:23.073167 1060798 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:6117bbf309b9c45faa7e855ae242c4a905187b8a6090715b408f9a384f87e114
I0120 14:05:23.075375 1060798 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
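If the --discovery-token-ca-cert-hash printed in the join commands above ever needs to be recomputed on the control plane, the standard kubeadm recipe is (assuming the CA sits at the default path inside the VM):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'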
I0120 14:05:23.075417 1060798 cni.go:84] Creating CNI manager for ""
I0120 14:05:23.075445 1060798 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0120 14:05:23.077601 1060798 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0120 14:05:22.089375 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0120 14:05:22.089408 1063160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0120 14:05:22.106543 1063160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098904515s)
I0120 14:05:22.106605 1063160 main.go:141] libmachine: Making call to close driver server
I0120 14:05:22.106616 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
I0120 14:05:22.106956 1063160 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:22.106976 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:22.106987 1063160 main.go:141] libmachine: Making call to close driver server
I0120 14:05:22.106995 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
I0120 14:05:22.107275 1063160 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:22.107300 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:22.115066 1063160 main.go:141] libmachine: Making call to close driver server
I0120 14:05:22.115096 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
I0120 14:05:22.115528 1063160 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:22.115548 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:22.115574 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Closing plugin on server side
I0120 14:05:22.180167 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0120 14:05:22.180241 1063160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0120 14:05:22.292751 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0120 14:05:22.292788 1063160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0120 14:05:22.338119 1063160 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:05:22.338160 1063160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0120 14:05:22.382828 1063160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:05:23.300334 1063160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.063234672s)
I0120 14:05:23.300414 1063160 main.go:141] libmachine: Making call to close driver server
I0120 14:05:23.300431 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
I0120 14:05:23.300841 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Closing plugin on server side
I0120 14:05:23.302811 1063160 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:23.302833 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:23.302843 1063160 main.go:141] libmachine: Making call to close driver server
I0120 14:05:23.302852 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
I0120 14:05:23.303171 1063160 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:23.303199 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:23.485044 1063160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.942633159s)
I0120 14:05:23.485191 1063160 main.go:141] libmachine: Making call to close driver server
I0120 14:05:23.485213 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
I0120 14:05:23.485695 1063160 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:23.485755 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:23.485721 1063160 main.go:141] libmachine: (newest-cni-488874) DBG | Closing plugin on server side
I0120 14:05:23.485784 1063160 main.go:141] libmachine: Making call to close driver server
I0120 14:05:23.485883 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
I0120 14:05:23.486182 1063160 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:23.486207 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:23.486222 1063160 addons.go:479] Verifying addon metrics-server=true in "newest-cni-488874"
I0120 14:05:24.106931 1063160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.72402581s)
I0120 14:05:24.107000 1063160 main.go:141] libmachine: Making call to close driver server
I0120 14:05:24.107019 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
I0120 14:05:24.107417 1063160 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:24.107441 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:24.107460 1063160 main.go:141] libmachine: Making call to close driver server
I0120 14:05:24.107472 1063160 main.go:141] libmachine: (newest-cni-488874) Calling .Close
I0120 14:05:24.107745 1063160 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:24.107766 1063160 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:24.109654 1063160 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p newest-cni-488874 addons enable metrics-server
I0120 14:05:24.111210 1063160 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0120 14:05:23.079121 1060798 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0120 14:05:23.091937 1060798 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0120 14:05:23.116874 1060798 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0120 14:05:23.116939 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:23.116978 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-553677 minikube.k8s.io/updated_at=2025_01_20T14_05_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=embed-certs-553677 minikube.k8s.io/primary=true
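To confirm by hand that the RBAC binding and node labels applied above took effect, one could run (illustrative; assumes kubectl is pointed at the embed-certs-553677 context):

    kubectl get clusterrolebinding minikube-rbac -o wide
    kubectl get node embed-certs-553677 --show-labels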
I0120 14:05:23.148895 1060798 ops.go:34] apiserver oom_adj: -16
I0120 14:05:23.378558 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:23.879347 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:24.112676 1063160 addons.go:514] duration metric: took 3.479328497s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
I0120 14:05:24.112745 1063160 start.go:246] waiting for cluster config update ...
I0120 14:05:24.112766 1063160 start.go:255] writing updated cluster config ...
I0120 14:05:24.113104 1063160 ssh_runner.go:195] Run: rm -f paused
I0120 14:05:24.170991 1063160 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
I0120 14:05:24.173034 1063160 out.go:177] * Done! kubectl is now configured to use "newest-cni-488874" cluster and "default" namespace by default
I0120 14:05:20.868649 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:22.869758 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:24.870554 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:24.379349 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:24.879187 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:25.379285 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:25.879105 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:26.379133 1060798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 14:05:26.478857 1060798 kubeadm.go:1113] duration metric: took 3.36197683s to wait for elevateKubeSystemPrivileges
I0120 14:05:26.478907 1060798 kubeadm.go:394] duration metric: took 4m36.924060891s to StartCluster
I0120 14:05:26.478935 1060798 settings.go:142] acquiring lock: {Name:mked7f2376b8a06c64dcfd911ab4b0d95ecdbe2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:05:26.479036 1060798 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20242-998973/kubeconfig
I0120 14:05:26.481214 1060798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-998973/kubeconfig: {Name:mkc416e4f6e76f39025eb204e9812d9900c83215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 14:05:26.481626 1060798 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.136 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0120 14:05:26.481760 1060798 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0120 14:05:26.481876 1060798 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-553677"
I0120 14:05:26.481896 1060798 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-553677"
W0120 14:05:26.481905 1060798 addons.go:247] addon storage-provisioner should already be in state true
I0120 14:05:26.481906 1060798 config.go:182] Loaded profile config "embed-certs-553677": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 14:05:26.481916 1060798 addons.go:69] Setting default-storageclass=true in profile "embed-certs-553677"
I0120 14:05:26.481942 1060798 addons.go:69] Setting metrics-server=true in profile "embed-certs-553677"
I0120 14:05:26.481958 1060798 addons.go:238] Setting addon metrics-server=true in "embed-certs-553677"
W0120 14:05:26.481970 1060798 addons.go:247] addon metrics-server should already be in state true
I0120 14:05:26.481989 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
I0120 14:05:26.481957 1060798 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-553677"
I0120 14:05:26.481936 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
I0120 14:05:26.482431 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.482468 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.481939 1060798 addons.go:69] Setting dashboard=true in profile "embed-certs-553677"
I0120 14:05:26.482542 1060798 addons.go:238] Setting addon dashboard=true in "embed-certs-553677"
W0120 14:05:26.482554 1060798 addons.go:247] addon dashboard should already be in state true
I0120 14:05:26.482556 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.482578 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
I0120 14:05:26.482592 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.482543 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.482710 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.482972 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.483025 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.483426 1060798 out.go:177] * Verifying Kubernetes components...
I0120 14:05:26.485000 1060798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 14:05:26.503670 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35691
I0120 14:05:26.503915 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
I0120 14:05:26.503956 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44967
I0120 14:05:26.504290 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.504434 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.505146 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.505154 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.505171 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.505175 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.505608 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.505613 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.505894 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.506345 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.506391 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.506479 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.506502 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.506645 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.506751 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.507010 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.507160 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
I0120 14:05:26.507428 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
I0120 14:05:26.507754 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.508311 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.508336 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.508797 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.509512 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.509563 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.512304 1060798 addons.go:238] Setting addon default-storageclass=true in "embed-certs-553677"
W0120 14:05:26.512327 1060798 addons.go:247] addon default-storageclass should already be in state true
I0120 14:05:26.512357 1060798 host.go:66] Checking if "embed-certs-553677" exists ...
I0120 14:05:26.512623 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.512672 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.529326 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45833
I0120 14:05:26.530030 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.530626 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.530648 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.530699 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35851
I0120 14:05:26.530970 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44857
I0120 14:05:26.531055 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.531380 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.531456 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.531589 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
I0120 14:05:26.531641 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.531661 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.532129 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.532156 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.532234 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.532425 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
I0120 14:05:26.532428 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
I0120 14:05:26.532828 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.532931 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.533311 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
I0120 14:05:26.535196 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.535230 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.535639 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.536245 1060798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0120 14:05:26.536293 1060798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 14:05:26.537777 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:05:26.538423 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:05:26.538544 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:05:26.540631 1060798 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0120 14:05:26.540639 1060798 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 14:05:26.540707 1060798 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0120 14:05:26.541975 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0120 14:05:26.541997 1060798 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0120 14:05:26.542019 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:05:26.542075 1060798 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:05:26.542094 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 14:05:26.542115 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:05:26.544926 1060798 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0120 14:05:26.546368 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0120 14:05:26.546392 1060798 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0120 14:05:26.546418 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:05:26.549578 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.549713 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.553664 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:05:26.553690 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.553947 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:05:26.554117 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:05:26.554221 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:05:26.554305 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:05:26.554626 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.554889 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:05:26.554914 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.555102 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:05:26.555168 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:05:26.555182 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.555284 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:05:26.555340 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:05:26.555596 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:05:26.555691 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:05:26.555715 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:05:26.555883 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:05:26.556015 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:05:26.560724 1060798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34799
I0120 14:05:26.561235 1060798 main.go:141] libmachine: () Calling .GetVersion
I0120 14:05:26.561723 1060798 main.go:141] libmachine: Using API Version 1
I0120 14:05:26.561738 1060798 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 14:05:26.562059 1060798 main.go:141] libmachine: () Calling .GetMachineName
I0120 14:05:26.562297 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetState
I0120 14:05:26.564026 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .DriverName
I0120 14:05:26.564278 1060798 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 14:05:26.564290 1060798 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0120 14:05:26.564304 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHHostname
I0120 14:05:26.567858 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.568393 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:7a:fd", ip: ""} in network mk-embed-certs-553677: {Iface:virbr1 ExpiryTime:2025-01-20 15:00:37 +0000 UTC Type:0 Mac:52:54:00:7d:7a:fd Iaid: IPaddr:192.168.72.136 Prefix:24 Hostname:embed-certs-553677 Clientid:01:52:54:00:7d:7a:fd}
I0120 14:05:26.568433 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | domain embed-certs-553677 has defined IP address 192.168.72.136 and MAC address 52:54:00:7d:7a:fd in network mk-embed-certs-553677
I0120 14:05:26.568556 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHPort
I0120 14:05:26.568742 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHKeyPath
I0120 14:05:26.568910 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .GetSSHUsername
I0120 14:05:26.569124 1060798 sshutil.go:53] new ssh client: &{IP:192.168.72.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-998973/.minikube/machines/embed-certs-553677/id_rsa Username:docker}
I0120 14:05:26.773077 1060798 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 14:05:26.800362 1060798 node_ready.go:35] waiting up to 6m0s for node "embed-certs-553677" to be "Ready" ...
I0120 14:05:26.843740 1060798 node_ready.go:49] node "embed-certs-553677" has status "Ready":"True"
I0120 14:05:26.843780 1060798 node_ready.go:38] duration metric: took 43.372924ms for node "embed-certs-553677" to be "Ready" ...
I0120 14:05:26.843796 1060798 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 14:05:26.873119 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0120 14:05:26.873149 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0120 14:05:26.874981 1060798 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:26.906789 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 14:05:26.940145 1060798 pod_ready.go:93] pod "etcd-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:05:26.940190 1060798 pod_ready.go:82] duration metric: took 65.181123ms for pod "etcd-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:26.940211 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:26.969325 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0120 14:05:26.969365 1060798 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0120 14:05:26.969405 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 14:05:26.989583 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0120 14:05:26.989615 1060798 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0120 14:05:27.153235 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0120 14:05:27.153271 1060798 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0120 14:05:27.177818 1060798 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:05:27.177844 1060798 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0120 14:05:27.342345 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0120 14:05:27.342379 1060798 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0120 14:05:27.474579 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0120 14:05:27.474615 1060798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0120 14:05:27.480859 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 14:05:27.583861 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0120 14:05:27.583897 1060798 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0120 14:05:27.625368 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:27.625405 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:27.625755 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:27.625774 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:27.625784 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:27.625792 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:27.626090 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:27.626113 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:27.626136 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
I0120 14:05:27.642156 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:27.642194 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:27.642522 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:27.642553 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:27.884652 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0120 14:05:27.884699 1060798 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0120 14:05:28.031119 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0120 14:05:28.031155 1060798 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0120 14:05:28.145159 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0120 14:05:28.145199 1060798 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0120 14:05:28.273725 1060798 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:05:28.273765 1060798 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0120 14:05:28.506539 1060798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0120 14:05:28.887655 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.918209178s)
I0120 14:05:28.887715 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:28.887730 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:28.888066 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:28.888078 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:28.888089 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:28.888098 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:28.889637 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:28.889660 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:28.889672 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
I0120 14:05:28.971702 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:27.380463 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:29.867706 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:29.421863 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.940948518s)
I0120 14:05:29.421940 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:29.421960 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:29.422340 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
I0120 14:05:29.422359 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:29.422381 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:29.422399 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:29.422412 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:29.422673 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:29.422690 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:29.422702 1060798 addons.go:479] Verifying addon metrics-server=true in "embed-certs-553677"
I0120 14:05:29.422725 1060798 main.go:141] libmachine: (embed-certs-553677) DBG | Closing plugin on server side
I0120 14:05:30.228977 1060798 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.722367434s)
I0120 14:05:30.229039 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:30.229056 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:30.229398 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:30.229421 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:30.229431 1060798 main.go:141] libmachine: Making call to close driver server
I0120 14:05:30.229439 1060798 main.go:141] libmachine: (embed-certs-553677) Calling .Close
I0120 14:05:30.229692 1060798 main.go:141] libmachine: Successfully made call to close driver server
I0120 14:05:30.229713 1060798 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 14:05:30.231477 1060798 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-553677 addons enable metrics-server
I0120 14:05:30.233108 1060798 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0120 14:05:30.234556 1060798 addons.go:514] duration metric: took 3.752807641s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
I0120 14:05:31.446192 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:33.453220 1060798 pod_ready.go:103] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:31.868796 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:34.366219 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:35.447702 1060798 pod_ready.go:93] pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:05:35.447735 1060798 pod_ready.go:82] duration metric: took 8.507515045s for pod "kube-apiserver-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.447745 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.453130 1060798 pod_ready.go:93] pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:05:35.453158 1060798 pod_ready.go:82] duration metric: took 5.406746ms for pod "kube-controller-manager-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.453169 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p5rcq" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.457466 1060798 pod_ready.go:93] pod "kube-proxy-p5rcq" in "kube-system" namespace has status "Ready":"True"
I0120 14:05:35.457492 1060798 pod_ready.go:82] duration metric: took 4.316578ms for pod "kube-proxy-p5rcq" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.457503 1060798 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.462012 1060798 pod_ready.go:93] pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace has status "Ready":"True"
I0120 14:05:35.462036 1060798 pod_ready.go:82] duration metric: took 4.526901ms for pod "kube-scheduler-embed-certs-553677" in "kube-system" namespace to be "Ready" ...
I0120 14:05:35.462043 1060798 pod_ready.go:39] duration metric: took 8.61823381s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 14:05:35.462058 1060798 api_server.go:52] waiting for apiserver process to appear ...
I0120 14:05:35.462111 1060798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:05:35.477958 1060798 api_server.go:72] duration metric: took 8.996279799s to wait for apiserver process to appear ...
I0120 14:05:35.477993 1060798 api_server.go:88] waiting for apiserver healthz status ...
I0120 14:05:35.478019 1060798 api_server.go:253] Checking apiserver healthz at https://192.168.72.136:8443/healthz ...
I0120 14:05:35.483505 1060798 api_server.go:279] https://192.168.72.136:8443/healthz returned 200:
ok
I0120 14:05:35.484660 1060798 api_server.go:141] control plane version: v1.32.0
I0120 14:05:35.484690 1060798 api_server.go:131] duration metric: took 6.687782ms to wait for apiserver health ...
I0120 14:05:35.484701 1060798 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 14:05:35.490073 1060798 system_pods.go:59] 9 kube-system pods found
I0120 14:05:35.490118 1060798 system_pods.go:61] "coredns-668d6bf9bc-6dk7s" [1bba3148-0210-42ef-b08e-753e16365e33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0120 14:05:35.490129 1060798 system_pods.go:61] "coredns-668d6bf9bc-88phd" [dfc4947e-a505-4337-99d3-156d86f7646c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0120 14:05:35.490137 1060798 system_pods.go:61] "etcd-embed-certs-553677" [c915afbe-8665-4fbf-bcae-802c3ca214dd] Running
I0120 14:05:35.490143 1060798 system_pods.go:61] "kube-apiserver-embed-certs-553677" [d04063fb-d723-4a72-9024-0b6ceba0f09d] Running
I0120 14:05:35.490149 1060798 system_pods.go:61] "kube-controller-manager-embed-certs-553677" [c6de6703-1533-4391-a67e-f2c2208ebafe] Running
I0120 14:05:35.490153 1060798 system_pods.go:61] "kube-proxy-p5rcq" [3a9ddae1-ef67-4dd0-9c18-77e796c37d2a] Running
I0120 14:05:35.490157 1060798 system_pods.go:61] "kube-scheduler-embed-certs-553677" [10c63c3f-0748-4af6-94fb-a0ca644d4c61] Running
I0120 14:05:35.490164 1060798 system_pods.go:61] "metrics-server-f79f97bbb-b92sv" [f9b310a6-0d19-4084-aeae-ebe0a395d042] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 14:05:35.490170 1060798 system_pods.go:61] "storage-provisioner" [a6c0070e-1e3c-48af-80e3-1c3ca9163bf8] Running
I0120 14:05:35.490179 1060798 system_pods.go:74] duration metric: took 5.471078ms to wait for pod list to return data ...
I0120 14:05:35.490189 1060798 default_sa.go:34] waiting for default service account to be created ...
I0120 14:05:35.493453 1060798 default_sa.go:45] found service account: "default"
I0120 14:05:35.493489 1060798 default_sa.go:55] duration metric: took 3.2839ms for default service account to be created ...
I0120 14:05:35.493500 1060798 system_pods.go:137] waiting for k8s-apps to be running ...
I0120 14:05:35.648514 1060798 system_pods.go:87] 9 kube-system pods found
I0120 14:05:36.368251 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:38.868623 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:40.870222 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:43.380035 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:45.867670 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:47.868766 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:50.366281 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:52.367402 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:54.866983 1061268 pod_ready.go:103] pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace has status "Ready":"False"
I0120 14:05:54.867021 1061268 pod_ready.go:82] duration metric: took 4m0.006587828s for pod "metrics-server-f79f97bbb-nfwzt" in "kube-system" namespace to be "Ready" ...
E0120 14:05:54.867033 1061268 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0120 14:05:54.867044 1061268 pod_ready.go:39] duration metric: took 4m2.396402991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 14:05:54.867065 1061268 api_server.go:52] waiting for apiserver process to appear ...
I0120 14:05:54.867111 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:05:54.867187 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:05:54.917788 1061268 cri.go:89] found id: "a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
I0120 14:05:54.917828 1061268 cri.go:89] found id: "9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
I0120 14:05:54.917834 1061268 cri.go:89] found id: ""
I0120 14:05:54.917844 1061268 logs.go:282] 2 containers: [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3]
I0120 14:05:54.917927 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:54.923337 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:54.929376 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:05:54.929471 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:05:54.984694 1061268 cri.go:89] found id: "02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
I0120 14:05:54.984729 1061268 cri.go:89] found id: "0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
I0120 14:05:54.984733 1061268 cri.go:89] found id: ""
I0120 14:05:54.984750 1061268 logs.go:282] 2 containers: [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778]
I0120 14:05:54.984816 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:54.990663 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:54.996383 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:05:54.996492 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:05:55.041873 1061268 cri.go:89] found id: "c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
I0120 14:05:55.041908 1061268 cri.go:89] found id: "cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
I0120 14:05:55.041914 1061268 cri.go:89] found id: ""
I0120 14:05:55.041924 1061268 logs.go:282] 2 containers: [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9]
I0120 14:05:55.042006 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:55.047779 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:55.052191 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:05:55.052295 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:05:55.102560 1061268 cri.go:89] found id: "09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
I0120 14:05:55.102594 1061268 cri.go:89] found id: "d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
I0120 14:05:55.102600 1061268 cri.go:89] found id: ""
I0120 14:05:55.102610 1061268 logs.go:282] 2 containers: [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943]
I0120 14:05:55.102682 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:55.108113 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:55.113558 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:05:55.113644 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:05:55.158692 1061268 cri.go:89] found id: "3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
I0120 14:05:55.158724 1061268 cri.go:89] found id: "aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
I0120 14:05:55.158729 1061268 cri.go:89] found id: ""
I0120 14:05:55.158739 1061268 logs.go:282] 2 containers: [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33]
I0120 14:05:55.158801 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:55.163830 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:55.168399 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:05:55.168475 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:05:55.224035 1061268 cri.go:89] found id: "c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
I0120 14:05:55.224068 1061268 cri.go:89] found id: "025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
I0120 14:05:55.224074 1061268 cri.go:89] found id: ""
I0120 14:05:55.224085 1061268 logs.go:282] 2 containers: [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3]
I0120 14:05:55.224158 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:55.228696 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:55.233948 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:05:55.234023 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:05:55.272989 1061268 cri.go:89] found id: ""
I0120 14:05:55.273024 1061268 logs.go:282] 0 containers: []
W0120 14:05:55.273033 1061268 logs.go:284] No container was found matching "kindnet"
I0120 14:05:55.273040 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:05:55.273108 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:05:55.320199 1061268 cri.go:89] found id: "192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
I0120 14:05:55.320229 1061268 cri.go:89] found id: "f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
I0120 14:05:55.320233 1061268 cri.go:89] found id: ""
I0120 14:05:55.320242 1061268 logs.go:282] 2 containers: [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22]
I0120 14:05:55.320295 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:55.325143 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:55.334774 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:05:55.334849 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:05:55.383085 1061268 cri.go:89] found id: "6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
I0120 14:05:55.383121 1061268 cri.go:89] found id: ""
I0120 14:05:55.383133 1061268 logs.go:282] 1 containers: [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39]
I0120 14:05:55.383194 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:55.388216 1061268 logs.go:123] Gathering logs for kube-apiserver [9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3] ...
I0120 14:05:55.388253 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
I0120 14:05:55.446118 1061268 logs.go:123] Gathering logs for kube-controller-manager [025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3] ...
I0120 14:05:55.446152 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
I0120 14:05:55.502498 1061268 logs.go:123] Gathering logs for storage-provisioner [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8] ...
I0120 14:05:55.502538 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
I0120 14:05:55.548359 1061268 logs.go:123] Gathering logs for containerd ...
I0120 14:05:55.548400 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:05:55.609421 1061268 logs.go:123] Gathering logs for dmesg ...
I0120 14:05:55.609469 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:05:55.625660 1061268 logs.go:123] Gathering logs for etcd [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4] ...
I0120 14:05:55.625702 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
I0120 14:05:55.674797 1061268 logs.go:123] Gathering logs for kube-proxy [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802] ...
I0120 14:05:55.674846 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
I0120 14:05:55.715726 1061268 logs.go:123] Gathering logs for kube-proxy [aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33] ...
I0120 14:05:55.715767 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
I0120 14:05:55.755665 1061268 logs.go:123] Gathering logs for kube-controller-manager [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a] ...
I0120 14:05:55.755700 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
I0120 14:05:55.815422 1061268 logs.go:123] Gathering logs for storage-provisioner [f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22] ...
I0120 14:05:55.815464 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
I0120 14:05:55.858791 1061268 logs.go:123] Gathering logs for kubelet ...
I0120 14:05:55.858825 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 14:05:55.937094 1061268 logs.go:123] Gathering logs for kube-apiserver [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42] ...
I0120 14:05:55.937147 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
I0120 14:05:55.991427 1061268 logs.go:123] Gathering logs for etcd [0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778] ...
I0120 14:05:55.991470 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
I0120 14:05:56.037962 1061268 logs.go:123] Gathering logs for coredns [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092] ...
I0120 14:05:56.038001 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
I0120 14:05:56.078966 1061268 logs.go:123] Gathering logs for coredns [cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9] ...
I0120 14:05:56.079002 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
I0120 14:05:56.123993 1061268 logs.go:123] Gathering logs for kube-scheduler [d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943] ...
I0120 14:05:56.124028 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
I0120 14:05:56.174816 1061268 logs.go:123] Gathering logs for container status ...
I0120 14:05:56.174864 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 14:05:56.227944 1061268 logs.go:123] Gathering logs for describe nodes ...
I0120 14:05:56.227981 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 14:05:56.373827 1061268 logs.go:123] Gathering logs for kube-scheduler [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29] ...
I0120 14:05:56.373869 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
I0120 14:05:56.419064 1061268 logs.go:123] Gathering logs for kubernetes-dashboard [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39] ...
I0120 14:05:56.419105 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
I0120 14:05:58.964349 1061268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 14:05:58.982111 1061268 api_server.go:72] duration metric: took 4m11.799712602s to wait for apiserver process to appear ...
I0120 14:05:58.982153 1061268 api_server.go:88] waiting for apiserver healthz status ...
I0120 14:05:58.982207 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:05:58.982267 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:05:59.022764 1061268 cri.go:89] found id: "a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
I0120 14:05:59.022791 1061268 cri.go:89] found id: "9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
I0120 14:05:59.022795 1061268 cri.go:89] found id: ""
I0120 14:05:59.022802 1061268 logs.go:282] 2 containers: [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3]
I0120 14:05:59.022867 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.028807 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.035066 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:05:59.035164 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:05:59.081381 1061268 cri.go:89] found id: "02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
I0120 14:05:59.081414 1061268 cri.go:89] found id: "0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
I0120 14:05:59.081420 1061268 cri.go:89] found id: ""
I0120 14:05:59.081431 1061268 logs.go:282] 2 containers: [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778]
I0120 14:05:59.081503 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.086586 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.090923 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:05:59.091001 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:05:59.129195 1061268 cri.go:89] found id: "c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
I0120 14:05:59.129229 1061268 cri.go:89] found id: "cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
I0120 14:05:59.129235 1061268 cri.go:89] found id: ""
I0120 14:05:59.129245 1061268 logs.go:282] 2 containers: [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9]
I0120 14:05:59.129310 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.134230 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.139242 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:05:59.139365 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:05:59.180849 1061268 cri.go:89] found id: "09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
I0120 14:05:59.180884 1061268 cri.go:89] found id: "d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
I0120 14:05:59.180888 1061268 cri.go:89] found id: ""
I0120 14:05:59.180898 1061268 logs.go:282] 2 containers: [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943]
I0120 14:05:59.180991 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.185950 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.190730 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:05:59.190818 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:05:59.232733 1061268 cri.go:89] found id: "3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
I0120 14:05:59.232774 1061268 cri.go:89] found id: "aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
I0120 14:05:59.232780 1061268 cri.go:89] found id: ""
I0120 14:05:59.232790 1061268 logs.go:282] 2 containers: [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33]
I0120 14:05:59.232861 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.238473 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.243105 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:05:59.243188 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:05:59.282102 1061268 cri.go:89] found id: "c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
I0120 14:05:59.282132 1061268 cri.go:89] found id: "025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
I0120 14:05:59.282137 1061268 cri.go:89] found id: ""
I0120 14:05:59.282147 1061268 logs.go:282] 2 containers: [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3]
I0120 14:05:59.282231 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.286964 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.291689 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:05:59.291770 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:05:59.335494 1061268 cri.go:89] found id: ""
I0120 14:05:59.335532 1061268 logs.go:282] 0 containers: []
W0120 14:05:59.335542 1061268 logs.go:284] No container was found matching "kindnet"
I0120 14:05:59.335550 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:05:59.335622 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:05:59.382200 1061268 cri.go:89] found id: "6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
I0120 14:05:59.382235 1061268 cri.go:89] found id: ""
I0120 14:05:59.382245 1061268 logs.go:282] 1 containers: [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39]
I0120 14:05:59.382303 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.387107 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:05:59.387204 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:05:59.425237 1061268 cri.go:89] found id: "192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
I0120 14:05:59.425271 1061268 cri.go:89] found id: "f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
I0120 14:05:59.425277 1061268 cri.go:89] found id: ""
I0120 14:05:59.425286 1061268 logs.go:282] 2 containers: [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22]
I0120 14:05:59.425364 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.430391 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:05:59.435125 1061268 logs.go:123] Gathering logs for kube-controller-manager [025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3] ...
I0120 14:05:59.435168 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
I0120 14:05:59.489718 1061268 logs.go:123] Gathering logs for storage-provisioner [f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22] ...
I0120 14:05:59.489762 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
I0120 14:05:59.536425 1061268 logs.go:123] Gathering logs for dmesg ...
I0120 14:05:59.536471 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:05:59.555049 1061268 logs.go:123] Gathering logs for kube-proxy [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802] ...
I0120 14:05:59.555087 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
I0120 14:05:59.597084 1061268 logs.go:123] Gathering logs for kube-proxy [aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33] ...
I0120 14:05:59.597125 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
I0120 14:05:59.638067 1061268 logs.go:123] Gathering logs for kube-controller-manager [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a] ...
I0120 14:05:59.638100 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
I0120 14:05:59.706228 1061268 logs.go:123] Gathering logs for kube-apiserver [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42] ...
I0120 14:05:59.706274 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
I0120 14:05:59.753770 1061268 logs.go:123] Gathering logs for kube-apiserver [9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3] ...
I0120 14:05:59.753834 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
I0120 14:05:59.806616 1061268 logs.go:123] Gathering logs for etcd [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4] ...
I0120 14:05:59.806661 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
I0120 14:05:59.855127 1061268 logs.go:123] Gathering logs for containerd ...
I0120 14:05:59.855170 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:05:59.912684 1061268 logs.go:123] Gathering logs for kube-scheduler [d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943] ...
I0120 14:05:59.912740 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
I0120 14:05:59.961054 1061268 logs.go:123] Gathering logs for kubernetes-dashboard [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39] ...
I0120 14:05:59.961101 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
I0120 14:05:59.999981 1061268 logs.go:123] Gathering logs for storage-provisioner [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8] ...
I0120 14:06:00.000018 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
I0120 14:06:00.043176 1061268 logs.go:123] Gathering logs for container status ...
I0120 14:06:00.043224 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 14:06:00.088503 1061268 logs.go:123] Gathering logs for kubelet ...
I0120 14:06:00.088544 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 14:06:00.165437 1061268 logs.go:123] Gathering logs for describe nodes ...
I0120 14:06:00.165486 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 14:06:00.295533 1061268 logs.go:123] Gathering logs for etcd [0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778] ...
I0120 14:06:00.295579 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
I0120 14:06:00.357211 1061268 logs.go:123] Gathering logs for kube-scheduler [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29] ...
I0120 14:06:00.357243 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
I0120 14:06:00.405816 1061268 logs.go:123] Gathering logs for coredns [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092] ...
I0120 14:06:00.405851 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
I0120 14:06:00.448633 1061268 logs.go:123] Gathering logs for coredns [cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9] ...
I0120 14:06:00.448668 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
I0120 14:06:02.993693 1061268 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8444/healthz ...
I0120 14:06:03.000837 1061268 api_server.go:279] https://192.168.39.158:8444/healthz returned 200:
ok
I0120 14:06:03.002153 1061268 api_server.go:141] control plane version: v1.32.0
I0120 14:06:03.002197 1061268 api_server.go:131] duration metric: took 4.020033778s to wait for apiserver health ...
I0120 14:06:03.002209 1061268 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 14:06:03.002251 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0120 14:06:03.002366 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 14:06:03.042946 1061268 cri.go:89] found id: "a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
I0120 14:06:03.042976 1061268 cri.go:89] found id: "9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
I0120 14:06:03.042982 1061268 cri.go:89] found id: ""
I0120 14:06:03.042992 1061268 logs.go:282] 2 containers: [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3]
I0120 14:06:03.043060 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.048245 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.054072 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0120 14:06:03.054163 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0120 14:06:03.095236 1061268 cri.go:89] found id: "02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
I0120 14:06:03.095267 1061268 cri.go:89] found id: "0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
I0120 14:06:03.095273 1061268 cri.go:89] found id: ""
I0120 14:06:03.095283 1061268 logs.go:282] 2 containers: [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778]
I0120 14:06:03.095356 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.101394 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.106404 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0120 14:06:03.106491 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0120 14:06:03.147747 1061268 cri.go:89] found id: "c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
I0120 14:06:03.147777 1061268 cri.go:89] found id: "cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
I0120 14:06:03.147784 1061268 cri.go:89] found id: ""
I0120 14:06:03.147794 1061268 logs.go:282] 2 containers: [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9]
I0120 14:06:03.147859 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.153519 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.158247 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0120 14:06:03.158333 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 14:06:03.197681 1061268 cri.go:89] found id: "09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
I0120 14:06:03.197714 1061268 cri.go:89] found id: "d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
I0120 14:06:03.197721 1061268 cri.go:89] found id: ""
I0120 14:06:03.197731 1061268 logs.go:282] 2 containers: [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943]
I0120 14:06:03.197798 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.204003 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.208671 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0120 14:06:03.208757 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 14:06:03.256457 1061268 cri.go:89] found id: "3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
I0120 14:06:03.256487 1061268 cri.go:89] found id: "aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
I0120 14:06:03.256491 1061268 cri.go:89] found id: ""
I0120 14:06:03.256499 1061268 logs.go:282] 2 containers: [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33]
I0120 14:06:03.256549 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.262961 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.268145 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0120 14:06:03.268221 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 14:06:03.312818 1061268 cri.go:89] found id: "c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
I0120 14:06:03.312847 1061268 cri.go:89] found id: "025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
I0120 14:06:03.312851 1061268 cri.go:89] found id: ""
I0120 14:06:03.312859 1061268 logs.go:282] 2 containers: [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3]
I0120 14:06:03.312920 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.318436 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.323982 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0120 14:06:03.324066 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0120 14:06:03.380746 1061268 cri.go:89] found id: ""
I0120 14:06:03.380779 1061268 logs.go:282] 0 containers: []
W0120 14:06:03.380787 1061268 logs.go:284] No container was found matching "kindnet"
I0120 14:06:03.380794 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 14:06:03.380858 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 14:06:03.429155 1061268 cri.go:89] found id: "6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
I0120 14:06:03.429183 1061268 cri.go:89] found id: ""
I0120 14:06:03.429193 1061268 logs.go:282] 1 containers: [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39]
I0120 14:06:03.429264 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.434046 1061268 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0120 14:06:03.434129 1061268 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 14:06:03.478490 1061268 cri.go:89] found id: "192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
I0120 14:06:03.478519 1061268 cri.go:89] found id: "f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
I0120 14:06:03.478523 1061268 cri.go:89] found id: ""
I0120 14:06:03.478531 1061268 logs.go:282] 2 containers: [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22]
I0120 14:06:03.478587 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.483366 1061268 ssh_runner.go:195] Run: which crictl
I0120 14:06:03.488091 1061268 logs.go:123] Gathering logs for etcd [0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778] ...
I0120 14:06:03.488125 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c75dbdd53be2aea0fee6f43c237ec3823742d24c77fa7aa5c162d5060b63778"
I0120 14:06:03.537772 1061268 logs.go:123] Gathering logs for coredns [c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092] ...
I0120 14:06:03.537823 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c636f6d1c3ca5dd55a05bbf771a63c37227ad3df0fd3d5dab51e53fc6df96092"
I0120 14:06:03.584100 1061268 logs.go:123] Gathering logs for kube-controller-manager [c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a] ...
I0120 14:06:03.584134 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a49dd11c4c0335a34b0e5833e665d2f1a441c11db8699ac9aaf3362af1f78a"
I0120 14:06:03.646671 1061268 logs.go:123] Gathering logs for container status ...
I0120 14:06:03.646723 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0120 14:06:03.706076 1061268 logs.go:123] Gathering logs for coredns [cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9] ...
I0120 14:06:03.706119 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd1d209a519e93dddd3d69c8cf6e1621c397f6092ef4ebd4c8993f3dd30e93a9"
I0120 14:06:03.745730 1061268 logs.go:123] Gathering logs for kube-scheduler [09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29] ...
I0120 14:06:03.745775 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09950dfdbb046a2d1cf3eba5089e996dfe6231845671d427e66b0cad90bf8f29"
I0120 14:06:03.786902 1061268 logs.go:123] Gathering logs for kube-proxy [3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802] ...
I0120 14:06:03.786940 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3197926b668d65c9cc4278471fe0c989cedd9bdf1e9c87350cb5ded287057802"
I0120 14:06:03.830070 1061268 logs.go:123] Gathering logs for storage-provisioner [192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8] ...
I0120 14:06:03.830115 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 192cb1e4fd5fa6b6311d1fe52195efc15d7ce27cfa9ed0ea912b325201ed9ca8"
I0120 14:06:03.874536 1061268 logs.go:123] Gathering logs for storage-provisioner [f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22] ...
I0120 14:06:03.874594 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6b0dd0725bbc2262c7577838d0fa44c98e0b2efaad3fa074a4c33ec86c8aa22"
I0120 14:06:03.915750 1061268 logs.go:123] Gathering logs for kube-proxy [aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33] ...
I0120 14:06:03.915784 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa534af95004b3fe077a52a38cb923b05fe406529dc0c1e243a6cc8ae8cf9c33"
I0120 14:06:03.956123 1061268 logs.go:123] Gathering logs for kube-controller-manager [025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3] ...
I0120 14:06:03.956162 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 025055b322462e565ed23f8ebb14ed974477c0e5971086843408d0e8e8cda1d3"
I0120 14:06:04.016008 1061268 logs.go:123] Gathering logs for kubernetes-dashboard [6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39] ...
I0120 14:06:04.016059 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b9e25a753ca8553bec8243a9c3fa594e3662dcd41a394825afe800e98d90a39"
I0120 14:06:04.060273 1061268 logs.go:123] Gathering logs for describe nodes ...
I0120 14:06:04.060312 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0120 14:06:04.188515 1061268 logs.go:123] Gathering logs for kube-apiserver [a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42] ...
I0120 14:06:04.188571 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed8990e1e45ed3e66d50d5186fd18a9a49d174764d13518a367c70af79ac42"
I0120 14:06:04.236379 1061268 logs.go:123] Gathering logs for kube-apiserver [9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3] ...
I0120 14:06:04.236416 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9408104b2bc8b50ee9af342f70bc2efe4f5d0d8ed725752d2410341e89eaf2d3"
I0120 14:06:04.290511 1061268 logs.go:123] Gathering logs for etcd [02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4] ...
I0120 14:06:04.290552 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02f8fd9a4d7f7489ef0d4dae899392452870298c4f4f3fc4dbc49bbb093fa8c4"
I0120 14:06:04.344991 1061268 logs.go:123] Gathering logs for kube-scheduler [d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943] ...
I0120 14:06:04.345034 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1a5df859c03ba84889ee72da494516a50cc1bf133273d47bac6178c72fa7943"
I0120 14:06:04.409146 1061268 logs.go:123] Gathering logs for containerd ...
I0120 14:06:04.409193 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0120 14:06:04.469681 1061268 logs.go:123] Gathering logs for kubelet ...
I0120 14:06:04.469730 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 14:06:04.551443 1061268 logs.go:123] Gathering logs for dmesg ...
I0120 14:06:04.551486 1061268 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 14:06:07.075113 1061268 system_pods.go:59] 8 kube-system pods found
I0120 14:06:07.075149 1061268 system_pods.go:61] "coredns-668d6bf9bc-j4tcz" [ec868aad-83ba-424b-9c45-f01cb97dbf5c] Running
I0120 14:06:07.075154 1061268 system_pods.go:61] "etcd-default-k8s-diff-port-901416" [4b431891-d618-45f1-9818-02abb09dc774] Running
I0120 14:06:07.075161 1061268 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901416" [2aa81ce3-8c3f-454a-aa5d-ad52e56f16b6] Running
I0120 14:06:07.075164 1061268 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901416" [f937feab-0776-4a1e-8a99-659250ad2bfb] Running
I0120 14:06:07.075167 1061268 system_pods.go:61] "kube-proxy-6v2v7" [53d00002-be0a-4f71-97d2-607e482c5bfd] Running
I0120 14:06:07.075170 1061268 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901416" [525530a5-8789-4916-967a-6e976e91ccb3] Running
I0120 14:06:07.075177 1061268 system_pods.go:61] "metrics-server-f79f97bbb-nfwzt" [ba691a4d-ec1c-4929-ab0e-58fb2e485165] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 14:06:07.075181 1061268 system_pods.go:61] "storage-provisioner" [6d5f71d2-7d29-4a8b-ad69-8ad65b9565f6] Running
I0120 14:06:07.075189 1061268 system_pods.go:74] duration metric: took 4.072972909s to wait for pod list to return data ...
I0120 14:06:07.075199 1061268 default_sa.go:34] waiting for default service account to be created ...
I0120 14:06:07.077984 1061268 default_sa.go:45] found service account: "default"
I0120 14:06:07.078010 1061268 default_sa.go:55] duration metric: took 2.803991ms for default service account to be created ...
I0120 14:06:07.078018 1061268 system_pods.go:137] waiting for k8s-apps to be running ...
I0120 14:06:07.082687 1061268 system_pods.go:87] 8 kube-system pods found
I0120 14:06:07.086241 1061268 system_pods.go:105] "coredns-668d6bf9bc-j4tcz" [ec868aad-83ba-424b-9c45-f01cb97dbf5c] Running
I0120 14:06:07.086270 1061268 system_pods.go:105] "etcd-default-k8s-diff-port-901416" [4b431891-d618-45f1-9818-02abb09dc774] Running
I0120 14:06:07.086279 1061268 system_pods.go:105] "kube-apiserver-default-k8s-diff-port-901416" [2aa81ce3-8c3f-454a-aa5d-ad52e56f16b6] Running
I0120 14:06:07.086287 1061268 system_pods.go:105] "kube-controller-manager-default-k8s-diff-port-901416" [f937feab-0776-4a1e-8a99-659250ad2bfb] Running
I0120 14:06:07.086293 1061268 system_pods.go:105] "kube-proxy-6v2v7" [53d00002-be0a-4f71-97d2-607e482c5bfd] Running
I0120 14:06:07.086299 1061268 system_pods.go:105] "kube-scheduler-default-k8s-diff-port-901416" [525530a5-8789-4916-967a-6e976e91ccb3] Running
I0120 14:06:07.086312 1061268 system_pods.go:105] "metrics-server-f79f97bbb-nfwzt" [ba691a4d-ec1c-4929-ab0e-58fb2e485165] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 14:06:07.086321 1061268 system_pods.go:105] "storage-provisioner" [6d5f71d2-7d29-4a8b-ad69-8ad65b9565f6] Running
I0120 14:06:07.086334 1061268 system_pods.go:147] duration metric: took 8.307949ms to wait for k8s-apps to be running ...
I0120 14:06:07.086345 1061268 system_svc.go:44] waiting for kubelet service to be running ....
I0120 14:06:07.086398 1061268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0120 14:06:07.103417 1061268 system_svc.go:56] duration metric: took 17.063515ms WaitForService to wait for kubelet
I0120 14:06:07.103451 1061268 kubeadm.go:582] duration metric: took 4m19.921060894s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 14:06:07.103481 1061268 node_conditions.go:102] verifying NodePressure condition ...
I0120 14:06:07.107665 1061268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0120 14:06:07.107690 1061268 node_conditions.go:123] node cpu capacity is 2
I0120 14:06:07.107704 1061268 node_conditions.go:105] duration metric: took 4.218612ms to run NodePressure ...
I0120 14:06:07.107717 1061268 start.go:241] waiting for startup goroutines ...
I0120 14:06:07.107724 1061268 start.go:246] waiting for cluster config update ...
I0120 14:06:07.107735 1061268 start.go:255] writing updated cluster config ...
I0120 14:06:07.108022 1061268 ssh_runner.go:195] Run: rm -f paused
I0120 14:06:07.161569 1061268 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
I0120 14:06:07.163860 1061268 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-901416" cluster and "default" namespace by default
==> container status <==
CONTAINER       IMAGE           CREATED              STATE    NAME                        ATTEMPT   POD ID          POD
90e122ccfe167   523cad1a4df73   About a minute ago   Exited   dashboard-metrics-scraper   9         07210fc27dc62   dashboard-metrics-scraper-86c6bf9756-7w7lb
aa71aaebc59f9   07655ddf2eebe   21 minutes ago       Running  kubernetes-dashboard        0         35104563f83fe   kubernetes-dashboard-7779f9b69b-vcbk9
f3a40f7f95672   6e38f40d628db   21 minutes ago       Running  storage-provisioner         0         1c74e8c1fdafc   storage-provisioner
7bc12f446e72d   c69fa2e9cbf5f   21 minutes ago       Running  coredns                     0         1f79084cba5c8   coredns-668d6bf9bc-6dk7s
4d1a4fdda2e14   c69fa2e9cbf5f   21 minutes ago       Running  coredns                     0         c2c50aa0c057b   coredns-668d6bf9bc-88phd
064833c57608a   040f9f8aac8cd   21 minutes ago       Running  kube-proxy                  0         d2267e69e323c   kube-proxy-p5rcq
050793e2ff918   8cab3d2a8bd0f   21 minutes ago       Running  kube-controller-manager     2         16f3ec6463e28   kube-controller-manager-embed-certs-553677
5af45fd19b3a6   c2e17b8d0f4a3   21 minutes ago       Running  kube-apiserver              2         edb37c69c017c   kube-apiserver-embed-certs-553677
f3a74e677451d   a389e107f4ff1   21 minutes ago       Running  kube-scheduler              2         a77dd60ee5de8   kube-scheduler-embed-certs-553677
538390e842743   a9e7e6b294baf   21 minutes ago       Running  etcd                        2         079cf5b17f8a6   etcd-embed-certs-553677
==> containerd <==
Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.534294224Z" level=info msg="CreateContainer within sandbox \"07210fc27dc628b6dc419419431fe45b47634a6e12a353edf43b67d2cdb1da85\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04\""
Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.535518905Z" level=info msg="StartContainer for \"49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04\""
Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.617797496Z" level=info msg="StartContainer for \"49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04\" returns successfully"
Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.690252358Z" level=info msg="shim disconnected" id=49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04 namespace=k8s.io
Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.690381193Z" level=warning msg="cleaning up after shim disconnected" id=49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04 namespace=k8s.io
Jan 20 14:21:04 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:04.690433839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 14:21:05 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:05.084637273Z" level=info msg="RemoveContainer for \"f1c4781239f0e7cc966e2d446499da901e04b88d09396170fcbfad1da9597285\""
Jan 20 14:21:05 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:05.093136508Z" level=info msg="RemoveContainer for \"f1c4781239f0e7cc966e2d446499da901e04b88d09396170fcbfad1da9597285\" returns successfully"
Jan 20 14:21:36 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:36.506463132Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:21:36 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:36.538989240Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 20 14:21:36 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:36.541541272Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 20 14:21:36 embed-certs-553677 containerd[566]: time="2025-01-20T14:21:36.541662977Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.508261940Z" level=info msg="CreateContainer within sandbox \"07210fc27dc628b6dc419419431fe45b47634a6e12a353edf43b67d2cdb1da85\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.556225963Z" level=info msg="CreateContainer within sandbox \"07210fc27dc628b6dc419419431fe45b47634a6e12a353edf43b67d2cdb1da85\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5\""
Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.565617835Z" level=info msg="StartContainer for \"90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5\""
Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.697109852Z" level=info msg="StartContainer for \"90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5\" returns successfully"
Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.750723833Z" level=info msg="shim disconnected" id=90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5 namespace=k8s.io
Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.750796519Z" level=warning msg="cleaning up after shim disconnected" id=90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5 namespace=k8s.io
Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.750806552Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.831955930Z" level=info msg="RemoveContainer for \"49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04\""
Jan 20 14:26:06 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:06.838650816Z" level=info msg="RemoveContainer for \"49d9b9c29c80d2d64ee66c881be5b22760cbf284830b0af5b85ed63850c74c04\" returns successfully"
Jan 20 14:26:49 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:49.504719521Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 20 14:26:49 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:49.527169274Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 20 14:26:49 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:49.529828576Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 20 14:26:49 embed-certs-553677 containerd[566]: time="2025-01-20T14:26:49.529856350Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
==> coredns [4d1a4fdda2e1453c6a2cbe67869cc5361f63a5d8d0849b836d4ef4563b425223] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> coredns [7bc12f446e72dcf6c0cc56dea29b424f21c189627cd06fef036baabc8bfd7896] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> describe nodes <==
Name: embed-certs-553677
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=embed-certs-553677
kubernetes.io/os=linux
minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3
minikube.k8s.io/name=embed-certs-553677
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_20T14_05_23_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 20 Jan 2025 14:05:19 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: embed-certs-553677
AcquireTime: <unset>
RenewTime: Mon, 20 Jan 2025 14:27:08 +0000
Conditions:
Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------   -----------------                 ------------------                ------                       -------
MemoryPressure   False    Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:05:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False    Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:05:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False    Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:05:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True     Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:05:19 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.72.136
Hostname: embed-certs-553677
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: afa001d7b3024a01a82fa78feaf4cee9
System UUID: afa001d7-b302-4a01-a82f-a78feaf4cee9
Boot ID: 3d5c3b4b-1f08-4d28-840b-a8710e76bcea
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.23
Kubelet Version: v1.32.0
Kube-Proxy Version: v1.32.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace              Name                                          CPU Requests   CPU Limits   Memory Requests   Memory Limits   Age
---------              ----                                          ------------   ----------   ---------------   -------------   ---
kube-system            coredns-668d6bf9bc-6dk7s                      100m (5%)      0 (0%)       70Mi (3%)         170Mi (8%)      21m
kube-system            coredns-668d6bf9bc-88phd                      100m (5%)      0 (0%)       70Mi (3%)         170Mi (8%)      21m
kube-system            etcd-embed-certs-553677                       100m (5%)      0 (0%)       100Mi (4%)        0 (0%)          21m
kube-system            kube-apiserver-embed-certs-553677             250m (12%)     0 (0%)       0 (0%)            0 (0%)          21m
kube-system            kube-controller-manager-embed-certs-553677    200m (10%)     0 (0%)       0 (0%)            0 (0%)          21m
kube-system            kube-proxy-p5rcq                              0 (0%)         0 (0%)       0 (0%)            0 (0%)          21m
kube-system            kube-scheduler-embed-certs-553677             100m (5%)      0 (0%)       0 (0%)            0 (0%)          21m
kube-system            metrics-server-f79f97bbb-b92sv                100m (5%)      0 (0%)       200Mi (9%)        0 (0%)          21m
kube-system            storage-provisioner                           0 (0%)         0 (0%)       0 (0%)            0 (0%)          21m
kubernetes-dashboard   dashboard-metrics-scraper-86c6bf9756-7w7lb    0 (0%)         0 (0%)       0 (0%)            0 (0%)          21m
kubernetes-dashboard   kubernetes-dashboard-7779f9b69b-vcbk9         0 (0%)         0 (0%)       0 (0%)            0 (0%)          21m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                950m (47%)   0 (0%)
memory             440Mi (20%)  340Mi (16%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type    Reason                    Age                  From              Message
----    ------                    ----                 ----              -------
Normal  Starting                  21m                  kube-proxy
Normal  NodeHasSufficientMemory   21m (x8 over 21m)    kubelet           Node embed-certs-553677 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure     21m (x8 over 21m)    kubelet           Node embed-certs-553677 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID      21m (x7 over 21m)    kubelet           Node embed-certs-553677 status is now: NodeHasSufficientPID
Normal  Starting                  21m                  kubelet           Starting kubelet.
Normal  NodeAllocatableEnforced   21m                  kubelet           Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory   21m                  kubelet           Node embed-certs-553677 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure     21m                  kubelet           Node embed-certs-553677 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID      21m                  kubelet           Node embed-certs-553677 status is now: NodeHasSufficientPID
Normal  RegisteredNode            21m                  node-controller   Node embed-certs-553677 event: Registered Node embed-certs-553677 in Controller
==> dmesg <==
[ +0.053332] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
[ +0.042725] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +5.047663] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.023557] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +1.720652] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +7.404676] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
[ +0.066653] kauditd_printk_skb: 1 callbacks suppressed
[ +0.069864] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
[ +0.199782] systemd-fstab-generator[514]: Ignoring "noauto" option for root device
[ +0.171364] systemd-fstab-generator[526]: Ignoring "noauto" option for root device
[ +0.356820] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
[ +1.642366] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
[ +2.261960] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
[ +0.294436] kauditd_printk_skb: 217 callbacks suppressed
[ +5.464773] kauditd_printk_skb: 38 callbacks suppressed
[Jan20 14:01] kauditd_printk_skb: 91 callbacks suppressed
[Jan20 14:05] systemd-fstab-generator[3057]: Ignoring "noauto" option for root device
[ +1.718830] kauditd_printk_skb: 82 callbacks suppressed
[ +5.888402] systemd-fstab-generator[3421]: Ignoring "noauto" option for root device
[ +4.442390] systemd-fstab-generator[3512]: Ignoring "noauto" option for root device
[ +0.686341] kauditd_printk_skb: 34 callbacks suppressed
[ +7.733071] kauditd_printk_skb: 90 callbacks suppressed
[ +5.501271] kauditd_printk_skb: 8 callbacks suppressed
==> etcd [538390e8427430eed2b6e4bf3b12641221cd77efecc9084f454604fecfbeb222] <==
{"level":"info","ts":"2025-01-20T14:05:16.916542Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-01-20T14:05:16.919019Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-20T14:05:16.919830Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-20T14:05:16.920661Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-01-20T14:05:40.309184Z","caller":"traceutil/trace.go:171","msg":"trace[275623798] transaction","detail":"{read_only:false; response_revision:537; number_of_response:1; }","duration":"528.759205ms","start":"2025-01-20T14:05:39.778403Z","end":"2025-01-20T14:05:40.307162Z","steps":["trace[275623798] 'process raft request' (duration: 528.641002ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T14:05:40.321410Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T14:05:39.778376Z","time spent":"533.123963ms","remote":"127.0.0.1:44246","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":683,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-q327nomwnwzpte6jv5e2j5y5c4\" mod_revision:433 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-q327nomwnwzpte6jv5e2j5y5c4\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-q327nomwnwzpte6jv5e2j5y5c4\" > >"}
{"level":"info","ts":"2025-01-20T14:05:40.378602Z","caller":"traceutil/trace.go:171","msg":"trace[1052250938] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"597.260719ms","start":"2025-01-20T14:05:39.781325Z","end":"2025-01-20T14:05:40.378586Z","steps":["trace[1052250938] 'process raft request' (duration: 593.417016ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T14:05:40.378775Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T14:05:39.781282Z","time spent":"597.435761ms","remote":"127.0.0.1:44132","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:534 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"warn","ts":"2025-01-20T14:05:40.379928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"558.853213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T14:05:40.380725Z","caller":"traceutil/trace.go:171","msg":"trace[1909386249] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:538; }","duration":"559.681584ms","start":"2025-01-20T14:05:39.821033Z","end":"2025-01-20T14:05:40.380715Z","steps":["trace[1909386249] 'agreement among raft nodes before linearized reading' (duration: 558.748392ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T14:05:40.381391Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T14:05:39.821017Z","time spent":"560.354765ms","remote":"127.0.0.1:44156","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"info","ts":"2025-01-20T14:05:40.379852Z","caller":"traceutil/trace.go:171","msg":"trace[384467576] linearizableReadLoop","detail":"{readStateIndex:554; appliedIndex:553; }","duration":"557.218788ms","start":"2025-01-20T14:05:39.821079Z","end":"2025-01-20T14:05:40.378298Z","steps":["trace[384467576] 'read index received' (duration: 486.477234ms)","trace[384467576] 'applied index is now lower than readState.Index' (duration: 70.741009ms)"],"step_count":2}
{"level":"warn","ts":"2025-01-20T14:05:40.382044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"413.910973ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T14:05:40.382086Z","caller":"traceutil/trace.go:171","msg":"trace[128622718] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:538; }","duration":"413.996684ms","start":"2025-01-20T14:05:39.968082Z","end":"2025-01-20T14:05:40.382079Z","steps":["trace[128622718] 'agreement among raft nodes before linearized reading' (duration: 413.897921ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T14:05:40.382378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.942541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T14:05:40.382471Z","caller":"traceutil/trace.go:171","msg":"trace[1862852755] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:538; }","duration":"167.070504ms","start":"2025-01-20T14:05:40.215394Z","end":"2025-01-20T14:05:40.382464Z","steps":["trace[1862852755] 'agreement among raft nodes before linearized reading' (duration: 166.96143ms)"],"step_count":1}
{"level":"info","ts":"2025-01-20T14:15:17.599614Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":865}
{"level":"info","ts":"2025-01-20T14:15:17.644978Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":865,"took":"44.248618ms","hash":3046015032,"current-db-size-bytes":2932736,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2932736,"current-db-size-in-use":"2.9 MB"}
{"level":"info","ts":"2025-01-20T14:15:17.645112Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3046015032,"revision":865,"compact-revision":-1}
{"level":"info","ts":"2025-01-20T14:20:17.608243Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1116}
{"level":"info","ts":"2025-01-20T14:20:17.613512Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1116,"took":"4.438362ms","hash":3116006083,"current-db-size-bytes":2932736,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1769472,"current-db-size-in-use":"1.8 MB"}
{"level":"info","ts":"2025-01-20T14:20:17.613817Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3116006083,"revision":1116,"compact-revision":865}
{"level":"info","ts":"2025-01-20T14:25:17.616521Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1368}
{"level":"info","ts":"2025-01-20T14:25:17.622018Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1368,"took":"4.396777ms","hash":763323634,"current-db-size-bytes":2932736,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1781760,"current-db-size-in-use":"1.8 MB"}
{"level":"info","ts":"2025-01-20T14:25:17.622098Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":763323634,"revision":1368,"compact-revision":1116}
==> kernel <==
14:27:10 up 26 min, 0 users, load average: 0.02, 0.16, 0.17
Linux embed-certs-553677 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [5af45fd19b3a6433b0b011d77366c31a3a8c61d3527622e19f52c945a44ed255] <==
> logger="UnhandledError"
I0120 14:23:20.441545 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0120 14:25:19.440651 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 14:25:19.441001 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
W0120 14:25:20.443314 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 14:25:20.443676 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0120 14:25:20.443960 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 14:25:20.444271 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0120 14:25:20.445117 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 14:25:20.446335 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0120 14:26:20.446155 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 14:26:20.446512 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
W0120 14:26:20.446647 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 14:26:20.446799 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0120 14:26:20.448322 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 14:26:20.448368 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
==> kube-controller-manager [050793e2ff9184684b006b118ddbf73bfbbb3def7f332f79e31a733a246e93a7] <==
I0120 14:22:03.521818 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="198.307µs"
E0120 14:22:26.187306 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 14:22:26.301152 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 14:22:56.194463 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 14:22:56.309804 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 14:23:26.202042 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 14:23:26.318172 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 14:23:56.208710 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 14:23:56.331786 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 14:24:26.217184 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 14:24:26.340018 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 14:24:56.224512 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 14:24:56.351505 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 14:25:26.231136 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 14:25:26.362047 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0120 14:25:56.237527 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 14:25:56.377674 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0120 14:26:06.851029 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="372.17µs"
I0120 14:26:10.535147 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="102.298µs"
E0120 14:26:26.245147 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 14:26:26.385459 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0120 14:26:27.587010 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-553677"
E0120 14:26:56.252342 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0120 14:26:56.395195 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0120 14:27:03.527481 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="155.198µs"
==> kube-proxy [064833c57608a9b9181fcc6a9d9b35b48ac3129395f162eeb3fbcbd8d61ab67e] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0120 14:05:27.931307 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0120 14:05:27.957437 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.136"]
E0120 14:05:27.957527 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0120 14:05:28.174669 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0120 14:05:28.175283 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0120 14:05:28.175539 1 server_linux.go:170] "Using iptables Proxier"
I0120 14:05:28.195706 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0120 14:05:28.204551 1 server.go:497] "Version info" version="v1.32.0"
I0120 14:05:28.209403 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0120 14:05:28.214955 1 config.go:199] "Starting service config controller"
I0120 14:05:28.214995 1 shared_informer.go:313] Waiting for caches to sync for service config
I0120 14:05:28.215023 1 config.go:105] "Starting endpoint slice config controller"
I0120 14:05:28.215027 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0120 14:05:28.215788 1 config.go:329] "Starting node config controller"
I0120 14:05:28.215796 1 shared_informer.go:313] Waiting for caches to sync for node config
I0120 14:05:28.315699 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0120 14:05:28.315788 1 shared_informer.go:320] Caches are synced for service config
I0120 14:05:28.316146 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [f3a74e677451d5bed228f9f6297ebbd6bf5ab847fc34d9d171f66744d92aa03e] <==
W0120 14:05:20.380410 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0120 14:05:20.380696 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0120 14:05:20.413135 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0120 14:05:20.413440 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 14:05:20.464122 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 14:05:20.464431 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 14:05:20.509303 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 14:05:20.509593 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 14:05:20.575187 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0120 14:05:20.575225 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0120 14:05:20.669393 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0120 14:05:20.669849 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0120 14:05:20.730081 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 14:05:20.730184 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0120 14:05:20.801186 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0120 14:05:20.801666 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0120 14:05:20.806171 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0120 14:05:20.806483 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0120 14:05:20.810646 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0120 14:05:20.811053 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0120 14:05:20.859553 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 14:05:20.862180 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 14:05:20.879238 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 14:05:20.881971 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0120 14:05:22.749121 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jan 20 14:26:06 embed-certs-553677 kubelet[3428]: I0120 14:26:06.830426 3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
Jan 20 14:26:06 embed-certs-553677 kubelet[3428]: E0120 14:26:06.830598 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
Jan 20 14:26:10 embed-certs-553677 kubelet[3428]: I0120 14:26:10.515570 3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
Jan 20 14:26:10 embed-certs-553677 kubelet[3428]: E0120 14:26:10.515762 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
Jan 20 14:26:13 embed-certs-553677 kubelet[3428]: E0120 14:26:13.504371 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-b92sv" podUID="f9b310a6-0d19-4084-aeae-ebe0a395d042"
Jan 20 14:26:22 embed-certs-553677 kubelet[3428]: I0120 14:26:22.504394 3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
Jan 20 14:26:22 embed-certs-553677 kubelet[3428]: E0120 14:26:22.504605 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
Jan 20 14:26:22 embed-certs-553677 kubelet[3428]: E0120 14:26:22.552421 3428 iptables.go:577] "Could not set up iptables canary" err=<
Jan 20 14:26:22 embed-certs-553677 kubelet[3428]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Jan 20 14:26:22 embed-certs-553677 kubelet[3428]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Jan 20 14:26:22 embed-certs-553677 kubelet[3428]: Perhaps ip6tables or your kernel needs to be upgraded.
Jan 20 14:26:22 embed-certs-553677 kubelet[3428]: > table="nat" chain="KUBE-KUBELET-CANARY"
Jan 20 14:26:24 embed-certs-553677 kubelet[3428]: E0120 14:26:24.504652 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-b92sv" podUID="f9b310a6-0d19-4084-aeae-ebe0a395d042"
Jan 20 14:26:33 embed-certs-553677 kubelet[3428]: I0120 14:26:33.503088 3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
Jan 20 14:26:33 embed-certs-553677 kubelet[3428]: E0120 14:26:33.503288 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
Jan 20 14:26:36 embed-certs-553677 kubelet[3428]: E0120 14:26:36.504700 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-b92sv" podUID="f9b310a6-0d19-4084-aeae-ebe0a395d042"
Jan 20 14:26:47 embed-certs-553677 kubelet[3428]: I0120 14:26:47.504139 3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
Jan 20 14:26:47 embed-certs-553677 kubelet[3428]: E0120 14:26:47.504304 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
Jan 20 14:26:49 embed-certs-553677 kubelet[3428]: E0120 14:26:49.530250 3428 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 20 14:26:49 embed-certs-553677 kubelet[3428]: E0120 14:26:49.530629 3428 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 20 14:26:49 embed-certs-553677 kubelet[3428]: E0120 14:26:49.531084 3428 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt4r2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-b92sv_kube-system(f9b310a6-0d19-4084-aeae-ebe0a395d042): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
Jan 20 14:26:49 embed-certs-553677 kubelet[3428]: E0120 14:26:49.532666 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-b92sv" podUID="f9b310a6-0d19-4084-aeae-ebe0a395d042"
Jan 20 14:27:02 embed-certs-553677 kubelet[3428]: I0120 14:27:02.503787 3428 scope.go:117] "RemoveContainer" containerID="90e122ccfe167e5beebf3d4c6f4d3404543f731231e5d6c6d48c95f61a0ce9f5"
Jan 20 14:27:02 embed-certs-553677 kubelet[3428]: E0120 14:27:02.504083 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-7w7lb_kubernetes-dashboard(9e767d13-3e6f-4197-b8cf-30e59870e4c5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-7w7lb" podUID="9e767d13-3e6f-4197-b8cf-30e59870e4c5"
Jan 20 14:27:03 embed-certs-553677 kubelet[3428]: E0120 14:27:03.504724 3428 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-b92sv" podUID="f9b310a6-0d19-4084-aeae-ebe0a395d042"
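
The recurring ErrImagePull/ImagePullBackOff entries above trace back to DNS: the metrics-server container image is pinned to the non-existent registry host fake.domain, so every pull attempt fails at name resolution before containerd ever reaches a registry. A minimal Go sketch (illustrative only, not part of the test suite) that reproduces the same lookup failure outside the cluster:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The kubelet log reports: dial tcp: lookup fake.domain: no such host.
	// Resolving the same host directly shows the failure is plain DNS,
	// not a registry or containerd problem.
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		fmt.Println("lookup failed:", err) // expected path for a non-existent host
		return
	}
	fmt.Println("unexpectedly resolved:", addrs)
}
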
==> kubernetes-dashboard [aa71aaebc59f9590fdc60ff9497fc4fc81c29c6979fd8605e7cc5aebe6bb547c] <==
2025/01/20 14:15:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:15:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:16:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:16:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:17:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:17:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:18:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:18:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:19:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:19:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:20:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:20:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:21:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:21:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:22:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:22:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:23:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:23:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:24:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:24:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:25:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:25:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:26:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:26:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/20 14:27:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
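
The dashboard's metric client repeatedly gets "the server is currently unable to handle the request" for the dashboard-metrics-scraper Service, which is consistent with the scraper pod sitting in CrashLoopBackOff in the kubelet log above: a Service with no ready endpoints typically yields a 503 when requests are routed to it. A small sketch, assuming kubectl is on PATH and using the context name from this run (file and variable names are illustrative), that checks whether the Service has any ready addresses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Context name taken from the log above; adjust for another profile.
	ctx := "embed-certs-553677"

	// List the ready pod IPs backing the dashboard-metrics-scraper Service.
	// An empty result would explain the repeated health-check failures.
	out, err := exec.Command("kubectl", "--context", ctx,
		"-n", "kubernetes-dashboard", "get", "endpoints", "dashboard-metrics-scraper",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err, string(out))
		return
	}
	ips := strings.Fields(string(out))
	fmt.Printf("ready endpoint addresses: %d %v\n", len(ips), ips)
}
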
==> storage-provisioner [f3a40f7f9567275383954868f26b5113e242695b1aa9fc8ba6ba3fdba97915c9] <==
I0120 14:05:29.582932 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0120 14:05:29.626473 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0120 14:05:29.626834 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0120 14:05:29.644671 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0120 14:05:29.645594 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-553677_a045e4b7-35b0-4b64-a3d1-5f501c904876!
I0120 14:05:29.658176 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1be817fd-bc8a-4df2-9610-54e186f604de", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-553677_a045e4b7-35b0-4b64-a3d1-5f501c904876 became leader
I0120 14:05:29.747944 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-553677_a045e4b7-35b0-4b64-a3d1-5f501c904876!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-553677 -n embed-certs-553677
helpers_test.go:261: (dbg) Run: kubectl --context embed-certs-553677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-b92sv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context embed-certs-553677 describe pod metrics-server-f79f97bbb-b92sv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-553677 describe pod metrics-server-f79f97bbb-b92sv: exit status 1 (70.82065ms)
** stderr **
Error from server (NotFound): pods "metrics-server-f79f97bbb-b92sv" not found
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-553677 describe pod metrics-server-f79f97bbb-b92sv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1622.74s)
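
For reference, the post-mortem above first lists non-running pods with a field selector and then describes each one; the NotFound on describe is unsurprising, since the describe command is issued without a namespace (metrics-server lives in kube-system) and the pod may also have been replaced between the two calls. A minimal sketch of the same two-step query, assuming kubectl and the context from this run (names are illustrative, not the actual helpers_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "embed-certs-553677" // context name from the log above

	// Step 1: list all pods, across namespaces, that are not in phase Running.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}

	// Step 2: describe each one. Without an explicit -n, the lookup happens
	// in the current (default) namespace, which is one way to end up with
	// the "pods ... not found" error seen above.
	for _, pod := range strings.Fields(string(out)) {
		desc, err := exec.Command("kubectl", "--context", ctx, "describe", "pod", pod).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", pod, err, desc)
	}
}
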