=== RUN TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-amd64 start -p no-preload-887091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 02:57:21.896318 1064439 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/custom-flannel-541715/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-887091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1: signal: killed (26m25.952034554s)
-- stdout --
* [no-preload-887091] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20316
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting "no-preload-887091" primary control-plane node in "no-preload-887091" cluster
* Restarting existing kvm2 VM for "no-preload-887091" ...
* Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
* Configuring bridge CNI (Container Networking Interface) ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-887091 addons enable metrics-server
* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
-- /stdout --
** stderr **
I0127 02:57:17.826407 1119007 out.go:345] Setting OutFile to fd 1 ...
I0127 02:57:17.826674 1119007 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:57:17.826684 1119007 out.go:358] Setting ErrFile to fd 2...
I0127 02:57:17.826688 1119007 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:57:17.826883 1119007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
I0127 02:57:17.827437 1119007 out.go:352] Setting JSON to false
I0127 02:57:17.828461 1119007 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13185,"bootTime":1737933453,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 02:57:17.828579 1119007 start.go:139] virtualization: kvm guest
I0127 02:57:17.830766 1119007 out.go:177] * [no-preload-887091] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 02:57:17.832244 1119007 out.go:177] - MINIKUBE_LOCATION=20316
I0127 02:57:17.832251 1119007 notify.go:220] Checking for updates...
I0127 02:57:17.834592 1119007 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 02:57:17.835787 1119007 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
I0127 02:57:17.836899 1119007 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
I0127 02:57:17.838103 1119007 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 02:57:17.839250 1119007 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 02:57:17.840874 1119007 config.go:182] Loaded profile config "no-preload-887091": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:57:17.841323 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 02:57:17.841397 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:57:17.856780 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46079
I0127 02:57:17.857232 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 02:57:17.857742 1119007 main.go:141] libmachine: Using API Version 1
I0127 02:57:17.857764 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:57:17.858054 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:57:17.858248 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 02:57:17.858523 1119007 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 02:57:17.858848 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 02:57:17.858902 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:57:17.873721 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41431
I0127 02:57:17.874168 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 02:57:17.874629 1119007 main.go:141] libmachine: Using API Version 1
I0127 02:57:17.874660 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:57:17.874957 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:57:17.875141 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 02:57:17.911317 1119007 out.go:177] * Using the kvm2 driver based on existing profile
I0127 02:57:17.912538 1119007 start.go:297] selected driver: kvm2
I0127 02:57:17.912554 1119007 start.go:901] validating driver "kvm2" against &{Name:no-preload-887091 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-887091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 02:57:17.912724 1119007 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 02:57:17.913732 1119007 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:57:17.913823 1119007 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-1057178/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 02:57:17.929134 1119007 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0127 02:57:17.929668 1119007 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0127 02:57:17.929707 1119007 cni.go:84] Creating CNI manager for ""
I0127 02:57:17.929753 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 02:57:17.929790 1119007 start.go:340] cluster config:
{Name:no-preload-887091 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-887091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 02:57:17.929898 1119007 iso.go:125] acquiring lock: {Name:mkd30bc9d11f9170e89ad95ce7ba25fa6d1e04f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:57:17.932066 1119007 out.go:177] * Starting "no-preload-887091" primary control-plane node in "no-preload-887091" cluster
I0127 02:57:17.933218 1119007 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 02:57:17.933354 1119007 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/config.json ...
I0127 02:57:17.933496 1119007 cache.go:107] acquiring lock: {Name:mkaf3b489bfd6dc421a2fa86abe9d65b6bff11ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:57:17.933498 1119007 cache.go:107] acquiring lock: {Name:mkf36fb3c7936dc43a7accf4d09084c009e59a41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:57:17.933551 1119007 cache.go:107] acquiring lock: {Name:mkf165b974752458ff0611cfb9775fd80f2c97e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:57:17.933531 1119007 cache.go:107] acquiring lock: {Name:mkc9cd8f58fe1b37748c7212f0269bf025f162f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:57:17.933600 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0127 02:57:17.933612 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
I0127 02:57:17.933614 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
I0127 02:57:17.933500 1119007 cache.go:107] acquiring lock: {Name:mk60aac71096a73a7daed4ed978fcb744e76477d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:57:17.933624 1119007 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 144.173µs
I0127 02:57:17.933633 1119007 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 83.685µs
I0127 02:57:17.933642 1119007 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
I0127 02:57:17.933644 1119007 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
I0127 02:57:17.933610 1119007 start.go:360] acquireMachinesLock for no-preload-887091: {Name:mka8dc154c517d64837d06e2f84f8bddd0b82c58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0127 02:57:17.933665 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
I0127 02:57:17.933671 1119007 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 186.712µs
I0127 02:57:17.933676 1119007 start.go:364] duration metric: took 17.398µs to acquireMachinesLock for "no-preload-887091"
I0127 02:57:17.933680 1119007 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
I0127 02:57:17.933620 1119007 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.603µs
I0127 02:57:17.933689 1119007 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0127 02:57:17.933693 1119007 start.go:96] Skipping create...Using existing machine configuration
I0127 02:57:17.933689 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
I0127 02:57:17.933700 1119007 fix.go:54] fixHost starting:
I0127 02:57:17.933670 1119007 cache.go:107] acquiring lock: {Name:mk67516821ece3ab5011ba3de57f5e4304385ce1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:57:17.933705 1119007 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 209.742µs
I0127 02:57:17.933723 1119007 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
I0127 02:57:17.933729 1119007 cache.go:107] acquiring lock: {Name:mk8c8166121360e55636f1daf7b49e8ae0fd0b6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:57:17.933775 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
I0127 02:57:17.933747 1119007 cache.go:107] acquiring lock: {Name:mk99ee89a947dcdbf6fe1f2b02e866da7649a3da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 02:57:17.933785 1119007 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 168.665µs
I0127 02:57:17.933800 1119007 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
I0127 02:57:17.933879 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
I0127 02:57:17.933898 1119007 cache.go:115] /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
I0127 02:57:17.933897 1119007 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 221.731µs
I0127 02:57:17.933910 1119007 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 209.265µs
I0127 02:57:17.933918 1119007 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
I0127 02:57:17.933920 1119007 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
I0127 02:57:17.933928 1119007 cache.go:87] Successfully saved all images to host disk.
I0127 02:57:17.934028 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 02:57:17.934063 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 02:57:17.949426 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37949
I0127 02:57:17.949868 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 02:57:17.950384 1119007 main.go:141] libmachine: Using API Version 1
I0127 02:57:17.950420 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 02:57:17.950776 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 02:57:17.951024 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 02:57:17.951257 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
I0127 02:57:17.953016 1119007 fix.go:112] recreateIfNeeded on no-preload-887091: state=Stopped err=<nil>
I0127 02:57:17.953039 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
W0127 02:57:17.953189 1119007 fix.go:138] unexpected machine state, will restart: <nil>
I0127 02:57:17.954893 1119007 out.go:177] * Restarting existing kvm2 VM for "no-preload-887091" ...
I0127 02:57:17.956110 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Start
I0127 02:57:17.956288 1119007 main.go:141] libmachine: (no-preload-887091) starting domain...
I0127 02:57:17.956312 1119007 main.go:141] libmachine: (no-preload-887091) ensuring networks are active...
I0127 02:57:17.956990 1119007 main.go:141] libmachine: (no-preload-887091) Ensuring network default is active
I0127 02:57:17.957364 1119007 main.go:141] libmachine: (no-preload-887091) Ensuring network mk-no-preload-887091 is active
I0127 02:57:17.957797 1119007 main.go:141] libmachine: (no-preload-887091) getting domain XML...
I0127 02:57:17.958664 1119007 main.go:141] libmachine: (no-preload-887091) creating domain...
I0127 02:57:19.173205 1119007 main.go:141] libmachine: (no-preload-887091) waiting for IP...
I0127 02:57:19.174169 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:19.174658 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:19.174730 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:19.174644 1119042 retry.go:31] will retry after 202.79074ms: waiting for domain to come up
I0127 02:57:19.379134 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:19.379647 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:19.379677 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:19.379625 1119042 retry.go:31] will retry after 302.512758ms: waiting for domain to come up
I0127 02:57:19.684226 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:19.684853 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:19.684883 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:19.684803 1119042 retry.go:31] will retry after 351.89473ms: waiting for domain to come up
I0127 02:57:20.038122 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:20.038605 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:20.038673 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:20.038579 1119042 retry.go:31] will retry after 476.247327ms: waiting for domain to come up
I0127 02:57:20.516437 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:20.517032 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:20.517067 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:20.516999 1119042 retry.go:31] will retry after 736.862022ms: waiting for domain to come up
I0127 02:57:21.256068 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:21.256666 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:21.256691 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:21.256633 1119042 retry.go:31] will retry after 716.788959ms: waiting for domain to come up
I0127 02:57:21.975003 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:21.975580 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:21.975612 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:21.975554 1119042 retry.go:31] will retry after 798.105294ms: waiting for domain to come up
I0127 02:57:22.774811 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:22.775311 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:22.775337 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:22.775283 1119042 retry.go:31] will retry after 1.275835327s: waiting for domain to come up
I0127 02:57:24.052218 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:24.052768 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:24.052804 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:24.052759 1119042 retry.go:31] will retry after 1.463923822s: waiting for domain to come up
I0127 02:57:25.518368 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:25.518950 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:25.518982 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:25.518889 1119042 retry.go:31] will retry after 1.710831863s: waiting for domain to come up
I0127 02:57:27.231833 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:27.232414 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:27.232450 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:27.232365 1119042 retry.go:31] will retry after 2.473402712s: waiting for domain to come up
I0127 02:57:29.707356 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:29.708097 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:29.708163 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:29.708083 1119042 retry.go:31] will retry after 2.914089375s: waiting for domain to come up
I0127 02:57:32.623312 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:32.623781 1119007 main.go:141] libmachine: (no-preload-887091) DBG | unable to find current IP address of domain no-preload-887091 in network mk-no-preload-887091
I0127 02:57:32.623811 1119007 main.go:141] libmachine: (no-preload-887091) DBG | I0127 02:57:32.623748 1119042 retry.go:31] will retry after 4.217598377s: waiting for domain to come up
I0127 02:57:36.845771 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:36.846275 1119007 main.go:141] libmachine: (no-preload-887091) found domain IP: 192.168.61.201
I0127 02:57:36.846317 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has current primary IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:36.846338 1119007 main.go:141] libmachine: (no-preload-887091) reserving static IP address...
I0127 02:57:36.846916 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "no-preload-887091", mac: "52:54:00:32:f8:ff", ip: "192.168.61.201"} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:36.846956 1119007 main.go:141] libmachine: (no-preload-887091) DBG | skip adding static IP to network mk-no-preload-887091 - found existing host DHCP lease matching {name: "no-preload-887091", mac: "52:54:00:32:f8:ff", ip: "192.168.61.201"}
I0127 02:57:36.846976 1119007 main.go:141] libmachine: (no-preload-887091) reserved static IP address 192.168.61.201 for domain no-preload-887091
I0127 02:57:36.846996 1119007 main.go:141] libmachine: (no-preload-887091) waiting for SSH...
I0127 02:57:36.847014 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Getting to WaitForSSH function...
I0127 02:57:36.849363 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:36.849731 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:36.849756 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:36.849913 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Using SSH client type: external
I0127 02:57:36.849946 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa (-rw-------)
I0127 02:57:36.849967 1119007 main.go:141] libmachine: (no-preload-887091) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa -p 22] /usr/bin/ssh <nil>}
I0127 02:57:36.849976 1119007 main.go:141] libmachine: (no-preload-887091) DBG | About to run SSH command:
I0127 02:57:36.849988 1119007 main.go:141] libmachine: (no-preload-887091) DBG | exit 0
I0127 02:57:36.973058 1119007 main.go:141] libmachine: (no-preload-887091) DBG | SSH cmd err, output: <nil>:
I0127 02:57:36.973490 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetConfigRaw
I0127 02:57:36.974142 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetIP
I0127 02:57:36.976736 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:36.977165 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:36.977220 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:36.977400 1119007 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/config.json ...
I0127 02:57:36.977631 1119007 machine.go:93] provisionDockerMachine start ...
I0127 02:57:36.977653 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 02:57:36.977876 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 02:57:36.980076 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:36.980411 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:36.980430 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:36.980567 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 02:57:36.980763 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:36.980915 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:36.981066 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 02:57:36.981246 1119007 main.go:141] libmachine: Using SSH client type: native
I0127 02:57:36.981441 1119007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.201 22 <nil> <nil>}
I0127 02:57:36.981452 1119007 main.go:141] libmachine: About to run SSH command:
hostname
I0127 02:57:37.081339 1119007 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0127 02:57:37.081377 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetMachineName
I0127 02:57:37.081658 1119007 buildroot.go:166] provisioning hostname "no-preload-887091"
I0127 02:57:37.081691 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetMachineName
I0127 02:57:37.081924 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 02:57:37.084380 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.084725 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:37.084753 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.084895 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 02:57:37.085106 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:37.085263 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:37.085403 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 02:57:37.085626 1119007 main.go:141] libmachine: Using SSH client type: native
I0127 02:57:37.085814 1119007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.201 22 <nil> <nil>}
I0127 02:57:37.085825 1119007 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-887091 && echo "no-preload-887091" | sudo tee /etc/hostname
I0127 02:57:37.195993 1119007 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-887091
I0127 02:57:37.196030 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 02:57:37.198721 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.199061 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:37.199091 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.199222 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 02:57:37.199398 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:37.199587 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:37.199679 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 02:57:37.199831 1119007 main.go:141] libmachine: Using SSH client type: native
I0127 02:57:37.200021 1119007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.201 22 <nil> <nil>}
I0127 02:57:37.200043 1119007 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-887091' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-887091/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-887091' | sudo tee -a /etc/hosts;
fi
fi
I0127 02:57:37.306176 1119007 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 02:57:37.306207 1119007 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-1057178/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-1057178/.minikube}
I0127 02:57:37.306252 1119007 buildroot.go:174] setting up certificates
I0127 02:57:37.306267 1119007 provision.go:84] configureAuth start
I0127 02:57:37.306281 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetMachineName
I0127 02:57:37.306596 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetIP
I0127 02:57:37.309489 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.309825 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:37.309865 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.310024 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 02:57:37.311941 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.312264 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:37.312297 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.312369 1119007 provision.go:143] copyHostCerts
I0127 02:57:37.312444 1119007 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem, removing ...
I0127 02:57:37.312469 1119007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem
I0127 02:57:37.312550 1119007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem (1675 bytes)
I0127 02:57:37.312677 1119007 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem, removing ...
I0127 02:57:37.312711 1119007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem
I0127 02:57:37.312762 1119007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem (1078 bytes)
I0127 02:57:37.312855 1119007 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem, removing ...
I0127 02:57:37.312864 1119007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem
I0127 02:57:37.312913 1119007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem (1123 bytes)
I0127 02:57:37.313023 1119007 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem org=jenkins.no-preload-887091 san=[127.0.0.1 192.168.61.201 localhost minikube no-preload-887091]
I0127 02:57:37.408897 1119007 provision.go:177] copyRemoteCerts
I0127 02:57:37.409030 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 02:57:37.409075 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 02:57:37.411966 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.412302 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:37.412330 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.412523 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 02:57:37.412707 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:37.412851 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 02:57:37.412988 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
I0127 02:57:37.491461 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0127 02:57:37.516316 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0127 02:57:37.541258 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0127 02:57:37.565127 1119007 provision.go:87] duration metric: took 258.837723ms to configureAuth
I0127 02:57:37.565182 1119007 buildroot.go:189] setting minikube options for container-runtime
I0127 02:57:37.565398 1119007 config.go:182] Loaded profile config "no-preload-887091": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:57:37.565414 1119007 machine.go:96] duration metric: took 587.7693ms to provisionDockerMachine
I0127 02:57:37.565427 1119007 start.go:293] postStartSetup for "no-preload-887091" (driver="kvm2")
I0127 02:57:37.565455 1119007 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 02:57:37.565497 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 02:57:37.565851 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 02:57:37.565883 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 02:57:37.568521 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.568875 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:37.568905 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.569059 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 02:57:37.569248 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:37.569384 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 02:57:37.569520 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
I0127 02:57:37.652194 1119007 ssh_runner.go:195] Run: cat /etc/os-release
I0127 02:57:37.656807 1119007 info.go:137] Remote host: Buildroot 2023.02.9
I0127 02:57:37.656825 1119007 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/addons for local assets ...
I0127 02:57:37.656879 1119007 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/files for local assets ...
I0127 02:57:37.656966 1119007 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem -> 10644392.pem in /etc/ssl/certs
I0127 02:57:37.657060 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 02:57:37.666921 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /etc/ssl/certs/10644392.pem (1708 bytes)
I0127 02:57:37.694813 1119007 start.go:296] duration metric: took 129.36665ms for postStartSetup
I0127 02:57:37.694863 1119007 fix.go:56] duration metric: took 19.761162878s for fixHost
I0127 02:57:37.694911 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 02:57:37.697378 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.697699 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:37.697728 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.697917 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 02:57:37.698109 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:37.698223 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:37.698342 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 02:57:37.698490 1119007 main.go:141] libmachine: Using SSH client type: native
I0127 02:57:37.698659 1119007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.61.201 22 <nil> <nil>}
I0127 02:57:37.698669 1119007 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0127 02:57:37.797890 1119007 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946657.773139085
I0127 02:57:37.797917 1119007 fix.go:216] guest clock: 1737946657.773139085
I0127 02:57:37.797927 1119007 fix.go:229] Guest: 2025-01-27 02:57:37.773139085 +0000 UTC Remote: 2025-01-27 02:57:37.694887778 +0000 UTC m=+19.907510259 (delta=78.251307ms)
I0127 02:57:37.797955 1119007 fix.go:200] guest clock delta is within tolerance: 78.251307ms
I0127 02:57:37.797962 1119007 start.go:83] releasing machines lock for "no-preload-887091", held for 19.864277332s
I0127 02:57:37.797987 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 02:57:37.798292 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetIP
I0127 02:57:37.801179 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.801603 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:37.801655 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.801775 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 02:57:37.802406 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 02:57:37.802577 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 02:57:37.802685 1119007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 02:57:37.802729 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 02:57:37.802779 1119007 ssh_runner.go:195] Run: cat /version.json
I0127 02:57:37.802806 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 02:57:37.805280 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.805651 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:37.805679 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.805707 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.805807 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 02:57:37.806008 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:37.806169 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 02:57:37.806218 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:37.806248 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:37.806312 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
I0127 02:57:37.806416 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 02:57:37.806569 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 02:57:37.806739 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 02:57:37.806908 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
I0127 02:57:37.905701 1119007 ssh_runner.go:195] Run: systemctl --version
I0127 02:57:37.912321 1119007 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0127 02:57:37.918374 1119007 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0127 02:57:37.918461 1119007 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 02:57:37.935436 1119007 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0127 02:57:37.935460 1119007 start.go:495] detecting cgroup driver to use...
I0127 02:57:37.935528 1119007 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 02:57:37.966093 1119007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 02:57:37.981853 1119007 docker.go:217] disabling cri-docker service (if available) ...
I0127 02:57:37.981927 1119007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0127 02:57:37.996166 1119007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0127 02:57:38.010386 1119007 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0127 02:57:38.147866 1119007 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0127 02:57:38.297821 1119007 docker.go:233] disabling docker service ...
I0127 02:57:38.297892 1119007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 02:57:38.315550 1119007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 02:57:38.330634 1119007 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 02:57:38.468074 1119007 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 02:57:38.586611 1119007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 02:57:38.601731 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 02:57:38.624294 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 02:57:38.635465 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 02:57:38.646317 1119007 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 02:57:38.646407 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 02:57:38.656764 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 02:57:38.667294 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 02:57:38.677687 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 02:57:38.688025 1119007 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 02:57:38.698919 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 02:57:38.709435 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 02:57:38.719630 1119007 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
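The run of sed edits above rewrites containerd's CRI plugin config in place: pin the pause image to registry.k8s.io/pause:3.10, lift the OOM-score-adjust restriction, select cgroupfs by forcing SystemdCgroup = false, map the legacy v1 runtimes onto the runc v2 shim, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports. A spot-check sketch (key names as generated by containerd 1.7; exact layout varies by version):

    grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    # expected, roughly:
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true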
I0127 02:57:38.730310 1119007 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 02:57:38.739553 1119007 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0127 02:57:38.739618 1119007 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0127 02:57:38.752608 1119007 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
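The status-255 sysctl above is expected: /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, which is exactly what the follow-up modprobe does before IPv4 forwarding is switched on. By hand, the same sequence is:

    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo sysctl net.bridge.bridge-nf-call-iptables   # the probe that failed should now resolve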
I0127 02:57:38.762650 1119007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 02:57:38.877193 1119007 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 02:57:38.909183 1119007 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 02:57:38.909305 1119007 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 02:57:38.914225 1119007 retry.go:31] will retry after 794.922269ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0127 02:57:39.710334 1119007 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 02:57:39.716338 1119007 start.go:563] Will wait 60s for crictl version
I0127 02:57:39.716396 1119007 ssh_runner.go:195] Run: which crictl
I0127 02:57:39.720744 1119007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 02:57:39.766069 1119007 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0127 02:57:39.766130 1119007 ssh_runner.go:195] Run: containerd --version
I0127 02:57:39.796216 1119007 ssh_runner.go:195] Run: containerd --version
I0127 02:57:39.823008 1119007 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
I0127 02:57:39.824419 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetIP
I0127 02:57:39.827434 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:39.827849 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 02:57:39.827880 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 02:57:39.828134 1119007 ssh_runner.go:195] Run: grep 192.168.61.1 host.minikube.internal$ /etc/hosts
I0127 02:57:39.832991 1119007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 02:57:39.851687 1119007 kubeadm.go:883] updating cluster {Name:no-preload-887091 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-887091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 02:57:39.851862 1119007 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 02:57:39.851922 1119007 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 02:57:39.888199 1119007 containerd.go:627] all images are preloaded for containerd runtime.
I0127 02:57:39.888237 1119007 cache_images.go:84] Images are preloaded, skipping loading
I0127 02:57:39.888246 1119007 kubeadm.go:934] updating node { 192.168.61.201 8443 v1.32.1 containerd true true} ...
I0127 02:57:39.888357 1119007 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-887091 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.201
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:no-preload-887091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
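The unit text above is the payload for the kubeadm drop-in scp'd below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line clears the packaged command so minikube's own kubelet invocation replaces it rather than appending. To see the merged result on the VM (a sketch):

    systemctl cat kubelet                              # base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload && sudo systemctl restart kubelet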
I0127 02:57:39.888413 1119007 ssh_runner.go:195] Run: sudo crictl info
I0127 02:57:39.925368 1119007 cni.go:84] Creating CNI manager for ""
I0127 02:57:39.925404 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 02:57:39.925417 1119007 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0127 02:57:39.925447 1119007 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.201 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-887091 NodeName:no-preload-887091 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 02:57:39.925650 1119007 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.61.201
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "no-preload-887091"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.61.201"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.61.201"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
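That four-document file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what lands in /var/tmp/minikube/kubeadm.yaml.new below. To sanity-check such a file by hand, recent kubeadm builds can validate it directly, using the same pinned-binaries PATH trick this log uses later (a sketch; `kubeadm config validate` may not exist in older releases):

    sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new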
I0127 02:57:39.925742 1119007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 02:57:39.942833 1119007 binaries.go:44] Found k8s binaries, skipping transfer
I0127 02:57:39.942902 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 02:57:39.953967 1119007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
I0127 02:57:39.975996 1119007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 02:57:39.998062 1119007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2313 bytes)
I0127 02:57:40.018697 1119007 ssh_runner.go:195] Run: grep 192.168.61.201 control-plane.minikube.internal$ /etc/hosts
I0127 02:57:40.022738 1119007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.201 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 02:57:40.037382 1119007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 02:57:40.145744 1119007 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 02:57:40.164872 1119007 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091 for IP: 192.168.61.201
I0127 02:57:40.164902 1119007 certs.go:194] generating shared ca certs ...
I0127 02:57:40.164925 1119007 certs.go:226] acquiring lock for ca certs: {Name:mk567acc23cbe907605c03a2ec03c8e4859e8343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 02:57:40.165163 1119007 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key
I0127 02:57:40.165232 1119007 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key
I0127 02:57:40.165247 1119007 certs.go:256] generating profile certs ...
I0127 02:57:40.165476 1119007 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/client.key
I0127 02:57:40.165563 1119007 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/apiserver.key.aacd82e8
I0127 02:57:40.165631 1119007 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/proxy-client.key
I0127 02:57:40.165784 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem (1338 bytes)
W0127 02:57:40.165824 1119007 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439_empty.pem, impossibly tiny 0 bytes
I0127 02:57:40.165835 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem (1679 bytes)
I0127 02:57:40.165856 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem (1078 bytes)
I0127 02:57:40.165879 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem (1123 bytes)
I0127 02:57:40.165900 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem (1675 bytes)
I0127 02:57:40.165947 1119007 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem (1708 bytes)
I0127 02:57:40.166801 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 02:57:40.205043 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0127 02:57:40.233653 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 02:57:40.263194 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0127 02:57:40.300032 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0127 02:57:40.328591 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0127 02:57:40.365362 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 02:57:40.394991 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/no-preload-887091/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0127 02:57:40.426137 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem --> /usr/share/ca-certificates/1064439.pem (1338 bytes)
I0127 02:57:40.453968 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /usr/share/ca-certificates/10644392.pem (1708 bytes)
I0127 02:57:40.478752 1119007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 02:57:40.503851 1119007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 02:57:40.523274 1119007 ssh_runner.go:195] Run: openssl version
I0127 02:57:40.529744 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1064439.pem && ln -fs /usr/share/ca-certificates/1064439.pem /etc/ssl/certs/1064439.pem"
I0127 02:57:40.543427 1119007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1064439.pem
I0127 02:57:40.548863 1119007 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:51 /usr/share/ca-certificates/1064439.pem
I0127 02:57:40.548932 1119007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1064439.pem
I0127 02:57:40.555890 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1064439.pem /etc/ssl/certs/51391683.0"
I0127 02:57:40.567770 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10644392.pem && ln -fs /usr/share/ca-certificates/10644392.pem /etc/ssl/certs/10644392.pem"
I0127 02:57:40.579663 1119007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10644392.pem
I0127 02:57:40.584502 1119007 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:51 /usr/share/ca-certificates/10644392.pem
I0127 02:57:40.584560 1119007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10644392.pem
I0127 02:57:40.590675 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10644392.pem /etc/ssl/certs/3ec20f2e.0"
I0127 02:57:40.602765 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 02:57:40.614990 1119007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 02:57:40.620008 1119007 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:43 /usr/share/ca-certificates/minikubeCA.pem
I0127 02:57:40.620066 1119007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 02:57:40.626331 1119007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
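The ls / openssl x509 -hash / ln -fs triplets above implement OpenSSL's hashed-directory lookup: each trusted CA in /etc/ssl/certs needs a symlink named <subject-hash>.0, which is why minikubeCA.pem ends up behind b5213941.0. Reproduced by hand for one cert:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here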
I0127 02:57:40.638159 1119007 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 02:57:40.642982 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0127 02:57:40.649025 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0127 02:57:40.655003 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0127 02:57:40.661855 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0127 02:57:40.668260 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0127 02:57:40.674724 1119007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
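Each -checkend 86400 above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how minikube decides the existing control-plane certs can be reused rather than regenerated. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "still valid for at least 24h"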
I0127 02:57:40.681057 1119007 kubeadm.go:392] StartCluster: {Name:no-preload-887091 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-887091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 02:57:40.681181 1119007 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 02:57:40.681238 1119007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 02:57:40.730453 1119007 cri.go:89] found id: "90588c3ce0b8a9dbaf560bcb262f23194dd933c68b927fd7c5e20be35de69201"
I0127 02:57:40.730481 1119007 cri.go:89] found id: "3e5949f2b39e4ab9da5b70635fa1895d21df41ff4db6ea86abdd193c100aa8a7"
I0127 02:57:40.730486 1119007 cri.go:89] found id: "7cdb346acd2e0edae08bea0014e70359ae7bb2671a69f291e9b91c63f040e324"
I0127 02:57:40.730497 1119007 cri.go:89] found id: "9eddd5efbbd2d70ea5381943c743568f31bbfcffc12b83bec443d7dd34d43c9a"
I0127 02:57:40.730500 1119007 cri.go:89] found id: "4d92a1eac3fbc99ef3fc12923dd53feb32ddaeee2f883fba7011662b2b3f3810"
I0127 02:57:40.730505 1119007 cri.go:89] found id: "0763fefc30ad1b620c709c9e4ed03bf2898f401c29b68422d3513ffeb849aa91"
I0127 02:57:40.730509 1119007 cri.go:89] found id: "e2050524cabd927c990edac7abbdb29dba13769b611193b5df791f54e67e0b9d"
I0127 02:57:40.730513 1119007 cri.go:89] found id: "c5ecfdbe22b7f95ac852a8c856e9c6e0cd678ffb3188180ee85d6af384e9a017"
I0127 02:57:40.730517 1119007 cri.go:89] found id: ""
I0127 02:57:40.730584 1119007 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0127 02:57:40.746631 1119007 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-27T02:57:40Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0127 02:57:40.746770 1119007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 02:57:40.757045 1119007 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0127 02:57:40.757070 1119007 kubeadm.go:593] restartPrimaryControlPlane start ...
I0127 02:57:40.757118 1119007 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 02:57:40.767762 1119007 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 02:57:40.768602 1119007 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-887091" does not appear in /home/jenkins/minikube-integration/20316-1057178/kubeconfig
I0127 02:57:40.769144 1119007 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-1057178/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-887091" cluster setting kubeconfig missing "no-preload-887091" context setting]
I0127 02:57:40.769852 1119007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 02:57:40.771437 1119007 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 02:57:40.784688 1119007 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.201
I0127 02:57:40.784725 1119007 kubeadm.go:1160] stopping kube-system containers ...
I0127 02:57:40.784740 1119007 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0127 02:57:40.784842 1119007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 02:57:40.826025 1119007 cri.go:89] found id: "90588c3ce0b8a9dbaf560bcb262f23194dd933c68b927fd7c5e20be35de69201"
I0127 02:57:40.826050 1119007 cri.go:89] found id: "3e5949f2b39e4ab9da5b70635fa1895d21df41ff4db6ea86abdd193c100aa8a7"
I0127 02:57:40.826055 1119007 cri.go:89] found id: "7cdb346acd2e0edae08bea0014e70359ae7bb2671a69f291e9b91c63f040e324"
I0127 02:57:40.826077 1119007 cri.go:89] found id: "9eddd5efbbd2d70ea5381943c743568f31bbfcffc12b83bec443d7dd34d43c9a"
I0127 02:57:40.826082 1119007 cri.go:89] found id: "4d92a1eac3fbc99ef3fc12923dd53feb32ddaeee2f883fba7011662b2b3f3810"
I0127 02:57:40.826087 1119007 cri.go:89] found id: "0763fefc30ad1b620c709c9e4ed03bf2898f401c29b68422d3513ffeb849aa91"
I0127 02:57:40.826091 1119007 cri.go:89] found id: "e2050524cabd927c990edac7abbdb29dba13769b611193b5df791f54e67e0b9d"
I0127 02:57:40.826096 1119007 cri.go:89] found id: "c5ecfdbe22b7f95ac852a8c856e9c6e0cd678ffb3188180ee85d6af384e9a017"
I0127 02:57:40.826100 1119007 cri.go:89] found id: ""
I0127 02:57:40.826107 1119007 cri.go:252] Stopping containers: [90588c3ce0b8a9dbaf560bcb262f23194dd933c68b927fd7c5e20be35de69201 3e5949f2b39e4ab9da5b70635fa1895d21df41ff4db6ea86abdd193c100aa8a7 7cdb346acd2e0edae08bea0014e70359ae7bb2671a69f291e9b91c63f040e324 9eddd5efbbd2d70ea5381943c743568f31bbfcffc12b83bec443d7dd34d43c9a 4d92a1eac3fbc99ef3fc12923dd53feb32ddaeee2f883fba7011662b2b3f3810 0763fefc30ad1b620c709c9e4ed03bf2898f401c29b68422d3513ffeb849aa91 e2050524cabd927c990edac7abbdb29dba13769b611193b5df791f54e67e0b9d c5ecfdbe22b7f95ac852a8c856e9c6e0cd678ffb3188180ee85d6af384e9a017]
I0127 02:57:40.826175 1119007 ssh_runner.go:195] Run: which crictl
I0127 02:57:40.830410 1119007 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 90588c3ce0b8a9dbaf560bcb262f23194dd933c68b927fd7c5e20be35de69201 3e5949f2b39e4ab9da5b70635fa1895d21df41ff4db6ea86abdd193c100aa8a7 7cdb346acd2e0edae08bea0014e70359ae7bb2671a69f291e9b91c63f040e324 9eddd5efbbd2d70ea5381943c743568f31bbfcffc12b83bec443d7dd34d43c9a 4d92a1eac3fbc99ef3fc12923dd53feb32ddaeee2f883fba7011662b2b3f3810 0763fefc30ad1b620c709c9e4ed03bf2898f401c29b68422d3513ffeb849aa91 e2050524cabd927c990edac7abbdb29dba13769b611193b5df791f54e67e0b9d c5ecfdbe22b7f95ac852a8c856e9c6e0cd678ffb3188180ee85d6af384e9a017
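crictl stop --timeout=10 asks the runtime to SIGTERM each listed container and escalate to a kill after ten seconds; the ID list is exactly what the label-filtered crictl ps above returned. Any single ID can be examined first, e.g. (JSON output, truncated here):

    sudo crictl inspect 90588c3ce0b8a9dbaf560bcb262f23194dd933c68b927fd7c5e20be35de69201 | head -n 20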
I0127 02:57:40.882866 1119007 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0127 02:57:40.899075 1119007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 02:57:40.910270 1119007 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 02:57:40.910298 1119007 kubeadm.go:157] found existing configuration files:
I0127 02:57:40.910362 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 02:57:40.919483 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 02:57:40.919535 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 02:57:40.928981 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 02:57:40.938758 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 02:57:40.938833 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 02:57:40.952460 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 02:57:40.962955 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 02:57:40.963025 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 02:57:40.973872 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 02:57:40.983205 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 02:57:40.983280 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 02:57:40.993991 1119007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 02:57:41.004968 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0127 02:57:41.152772 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0127 02:57:42.177753 1119007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.024931478s)
I0127 02:57:42.177800 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0127 02:57:42.417533 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0127 02:57:42.511014 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0127 02:57:42.594177 1119007 api_server.go:52] waiting for apiserver process to appear ...
I0127 02:57:42.594282 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 02:57:43.095370 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 02:57:43.594987 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 02:57:44.095250 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 02:57:44.113039 1119007 api_server.go:72] duration metric: took 1.518862074s to wait for apiserver process to appear ...
I0127 02:57:44.113072 1119007 api_server.go:88] waiting for apiserver healthz status ...
I0127 02:57:44.113103 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
I0127 02:57:46.518925 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 02:57:46.518959 1119007 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 02:57:46.518979 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
I0127 02:57:46.540719 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 02:57:46.540755 1119007 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 02:57:46.614126 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
I0127 02:57:46.628902 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 02:57:46.628971 1119007 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[-]poststarthook/apiservice-discovery-controller failed: reason withheld
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 02:57:47.113363 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
I0127 02:57:47.125469 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 02:57:47.125511 1119007 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 02:57:47.613179 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
I0127 02:57:47.618904 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 02:57:47.618939 1119007 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 02:57:48.113537 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
I0127 02:57:48.118110 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
ok
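The healthz progression above is the normal restart sequence: 403 while anonymous requests are not yet covered by the bootstrap RBAC roles, 500 while the remaining poststarthooks (the [-] entries) finish, then 200. Once rbac/bootstrap-roles completes, the system:public-info-viewer role lets even unauthenticated probes read the endpoint, so the same per-check listing can be reproduced with:

    curl -k 'https://192.168.61.201:8443/healthz?verbose'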
I0127 02:57:48.125708 1119007 api_server.go:141] control plane version: v1.32.1
I0127 02:57:48.125745 1119007 api_server.go:131] duration metric: took 4.012658353s to wait for apiserver health ...
I0127 02:57:48.125759 1119007 cni.go:84] Creating CNI manager for ""
I0127 02:57:48.125768 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 02:57:48.127566 1119007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 02:57:48.128782 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 02:57:48.140489 1119007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0127 02:57:48.162813 1119007 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 02:57:48.176319 1119007 system_pods.go:59] 8 kube-system pods found
I0127 02:57:48.176370 1119007 system_pods.go:61] "coredns-668d6bf9bc-qkz5q" [f8f92df8-ef36-49b9-bb22-a88ab7906ac5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 02:57:48.176383 1119007 system_pods.go:61] "etcd-no-preload-887091" [be14b789-0033-4668-89a8-79a123455ba3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 02:57:48.176398 1119007 system_pods.go:61] "kube-apiserver-no-preload-887091" [cf42ffe7-87d3-4474-aff6-d86557db813d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 02:57:48.176412 1119007 system_pods.go:61] "kube-controller-manager-no-preload-887091" [d81a3345-0b6b-4650-9dba-0e4b0828728d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 02:57:48.176425 1119007 system_pods.go:61] "kube-proxy-rb9xh" [2dd0f353-2a59-4ee0-95d3-57bb062e90fd] Running
I0127 02:57:48.176438 1119007 system_pods.go:61] "kube-scheduler-no-preload-887091" [5a067209-1bbd-434c-b992-5ba08777bd64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 02:57:48.176448 1119007 system_pods.go:61] "metrics-server-f79f97bbb-z5lnh" [73883cee-23b2-4bd3-bfa1-99fc13c10251] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 02:57:48.176457 1119007 system_pods.go:61] "storage-provisioner" [70aaa8f6-8792-4c89-9ef2-3a774e7ffc28] Running
I0127 02:57:48.176468 1119007 system_pods.go:74] duration metric: took 13.627705ms to wait for pod list to return data ...
I0127 02:57:48.176481 1119007 node_conditions.go:102] verifying NodePressure condition ...
I0127 02:57:48.181230 1119007 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 02:57:48.181258 1119007 node_conditions.go:123] node cpu capacity is 2
I0127 02:57:48.181270 1119007 node_conditions.go:105] duration metric: took 4.781166ms to run NodePressure ...
I0127 02:57:48.181287 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0127 02:57:48.478593 1119007 kubeadm.go:724] waiting for restarted kubelet to initialise ...
I0127 02:57:48.484351 1119007 kubeadm.go:739] kubelet initialised
I0127 02:57:48.484381 1119007 kubeadm.go:740] duration metric: took 5.757501ms waiting for restarted kubelet to initialise ...
I0127 02:57:48.484394 1119007 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
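The "extra waiting" step polls each system-critical pod, selected by the component/k8s-app labels above, for the Ready condition with a 4m0s budget per pod. A rough kubectl equivalent for one of those labels (context name assumed to be the profile name written into the repaired kubeconfig):

    kubectl --context no-preload-887091 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m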
I0127 02:57:48.490047 1119007 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-qkz5q" in "kube-system" namespace to be "Ready" ...
I0127 02:57:50.497200 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-qkz5q" in "kube-system" namespace has status "Ready":"False"
I0127 02:57:51.998991 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-qkz5q" in "kube-system" namespace has status "Ready":"True"
I0127 02:57:51.999023 1119007 pod_ready.go:82] duration metric: took 3.50894667s for pod "coredns-668d6bf9bc-qkz5q" in "kube-system" namespace to be "Ready" ...
I0127 02:57:51.999034 1119007 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 02:57:54.007466 1119007 pod_ready.go:103] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"False"
I0127 02:57:56.505790 1119007 pod_ready.go:103] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"False"
I0127 02:57:58.508569 1119007 pod_ready.go:103] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:01.006571 1119007 pod_ready.go:93] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 02:58:01.006605 1119007 pod_ready.go:82] duration metric: took 9.007562594s for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 02:58:01.006620 1119007 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 02:58:01.013182 1119007 pod_ready.go:93] pod "kube-apiserver-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 02:58:01.013223 1119007 pod_ready.go:82] duration metric: took 6.590337ms for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 02:58:01.013238 1119007 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 02:58:01.018376 1119007 pod_ready.go:93] pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 02:58:01.018403 1119007 pod_ready.go:82] duration metric: took 5.157185ms for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 02:58:01.018418 1119007 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rb9xh" in "kube-system" namespace to be "Ready" ...
I0127 02:58:01.023448 1119007 pod_ready.go:93] pod "kube-proxy-rb9xh" in "kube-system" namespace has status "Ready":"True"
I0127 02:58:01.023473 1119007 pod_ready.go:82] duration metric: took 5.046305ms for pod "kube-proxy-rb9xh" in "kube-system" namespace to be "Ready" ...
I0127 02:58:01.023486 1119007 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 02:58:01.028930 1119007 pod_ready.go:93] pod "kube-scheduler-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 02:58:01.028971 1119007 pod_ready.go:82] duration metric: took 5.475315ms for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 02:58:01.028989 1119007 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" ...
I0127 02:58:03.036089 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:05.536328 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:07.536727 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:10.036861 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:12.535610 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:15.035999 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:17.037187 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:19.038847 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:21.536348 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:24.036301 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:26.040195 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:28.040739 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:30.537378 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:33.035642 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:35.037249 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:37.536224 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:40.037082 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:42.038813 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:44.535198 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:46.535680 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:48.536326 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:50.537376 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:53.035927 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:55.536437 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:58:57.537110 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:00.038394 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:02.536771 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:05.038185 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:07.536177 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:09.537029 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:11.537757 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:14.037470 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:16.536465 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:19.037156 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:21.536456 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:23.536645 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:26.035836 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:28.279411 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:30.536761 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:32.537456 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:35.039986 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:37.536688 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:40.037732 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:42.537928 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:45.037622 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:47.535790 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:49.536337 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:52.037459 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:54.540462 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:56.543579 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 02:59:59.036536 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:01.535350 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:03.536165 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:06.037041 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:08.535898 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:10.536209 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:13.036599 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:15.536079 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:17.536247 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:20.036743 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:22.037936 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:24.536008 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:26.536423 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:28.536964 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:31.036263 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:33.040054 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:35.536750 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:37.537599 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:40.037026 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:42.535068 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:44.535800 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:46.536400 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:48.536806 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:50.536883 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:53.036700 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:55.536261 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:00:57.538026 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:00.037107 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:02.536760 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:05.036562 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:07.037686 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:09.536381 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:12.036975 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:14.037371 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:16.038039 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:18.536740 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:21.034869 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:23.035750 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:25.536208 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:28.046605 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:30.538073 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:33.036281 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:35.038144 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:37.538369 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:40.038370 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:42.537020 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:45.037268 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:47.037856 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:49.537112 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:52.036723 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:54.536260 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:57.037759 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:01:59.553947 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:01.029438 1119007 pod_ready.go:82] duration metric: took 4m0.000430308s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" ...
E0127 03:02:01.029463 1119007 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" (will not retry!)
I0127 03:02:01.029492 1119007 pod_ready.go:39] duration metric: took 4m12.545085543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 03:02:01.029521 1119007 kubeadm.go:597] duration metric: took 4m20.2724454s to restartPrimaryControlPlane
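[Editor's note: the pod_ready.go lines above poll each system pod's "Ready" condition every few seconds until it turns True or the per-pod timeout (4m0s here) expires; metrics-server never became Ready, so the wait gave up without retrying and the control-plane restart was abandoned. A minimal sketch of that readiness test with client-go follows; it is illustrative, not minikube's actual pod_ready.go.]

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's "Ready" condition is True,
    // which is what each pod_ready.go poll in the log is checking.
    func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil // condition not reported yet
    }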
W0127 03:02:01.029578 1119007 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
! Unable to restart control-plane node(s), will reset cluster: <no value>
I0127 03:02:01.029603 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 03:02:03.004910 1119007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.9752757s)
I0127 03:02:03.005026 1119007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
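[Editor's note: having given up on restarting the existing control plane, minikube wipes it with `kubeadm reset` and checks that the kubelet is stopped before re-initializing. The two Run lines above execute over SSH inside the VM; the sketch below mirrors them locally with os/exec, purely for illustration.]

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same command the ssh_runner executes above; the PATH prefix points
        // at the kubeadm binary minikube stages under /var/lib/minikube/binaries.
        reset := `sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" ` +
            `kubeadm reset --cri-socket /run/containerd/containerd.sock --force`
        if out, err := exec.Command("/bin/bash", "-c", reset).CombinedOutput(); err != nil {
            log.Fatalf("kubeadm reset failed: %v\n%s", err, out)
        }
        // `systemctl is-active --quiet` reports state purely via exit code:
        // 0 means active, nonzero means inactive or failed.
        active := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
        log.Printf("kubelet active: %v", active)
    }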
I0127 03:02:03.022327 1119007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 03:02:03.033433 1119007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 03:02:03.043716 1119007 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 03:02:03.043751 1119007 kubeadm.go:157] found existing configuration files:
I0127 03:02:03.043807 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 03:02:03.053848 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 03:02:03.053913 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 03:02:03.064618 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 03:02:03.075259 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 03:02:03.075327 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 03:02:03.087088 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 03:02:03.098909 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 03:02:03.098975 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 03:02:03.110053 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 03:02:03.119864 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 03:02:03.119938 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
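[Editor's note: the grep/rm pairs above are the stale-config cleanup. Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not contain it is removed so `kubeadm init` regenerates it; here grep exits with status 2 because the reset already deleted the files. A sketch of that loop, using the endpoint string from the log:]

    package main

    import "os/exec"

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + f
            // grep exits 0 on a match, 1 on no match, 2 if the file is
            // missing; anything nonzero means the config cannot be trusted.
            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
                _ = exec.Command("sudo", "rm", "-f", path).Run()
            }
        }
    }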
I0127 03:02:03.130987 1119007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 03:02:03.185348 1119007 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 03:02:03.185417 1119007 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 03:02:03.314698 1119007 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 03:02:03.314881 1119007 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 03:02:03.315043 1119007 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 03:02:03.324401 1119007 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 03:02:03.326164 1119007 out.go:235] - Generating certificates and keys ...
I0127 03:02:03.326268 1119007 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 03:02:03.326359 1119007 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 03:02:03.326477 1119007 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 03:02:03.326572 1119007 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 03:02:03.326663 1119007 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 03:02:03.326738 1119007 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 03:02:03.326859 1119007 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 03:02:03.327073 1119007 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 03:02:03.327208 1119007 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 03:02:03.327338 1119007 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 03:02:03.327408 1119007 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 03:02:03.327502 1119007 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 03:02:03.521123 1119007 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 03:02:03.756848 1119007 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 03:02:03.911089 1119007 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 03:02:04.122010 1119007 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 03:02:04.383085 1119007 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 03:02:04.383614 1119007 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 03:02:04.386205 1119007 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 03:02:04.388044 1119007 out.go:235] - Booting up control plane ...
I0127 03:02:04.388157 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 03:02:04.388265 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 03:02:04.388373 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 03:02:04.409379 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 03:02:04.416389 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 03:02:04.416479 1119007 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 03:02:04.571487 1119007 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 03:02:04.571690 1119007 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 03:02:05.072916 1119007 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.574288ms
I0127 03:02:05.073090 1119007 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 03:02:10.574512 1119007 kubeadm.go:310] [api-check] The API server is healthy after 5.501444049s
I0127 03:02:10.590265 1119007 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 03:02:10.612200 1119007 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 03:02:10.650305 1119007 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 03:02:10.650585 1119007 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-887091 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 03:02:10.661688 1119007 kubeadm.go:310] [bootstrap-token] Using token: 25alvo.7xrmg7nh4q5v903n
I0127 03:02:10.663119 1119007 out.go:235] - Configuring RBAC rules ...
I0127 03:02:10.663280 1119007 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 03:02:10.671888 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 03:02:10.685310 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 03:02:10.690214 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 03:02:10.694363 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 03:02:10.698959 1119007 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 03:02:10.982964 1119007 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 03:02:11.430752 1119007 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 03:02:11.982446 1119007 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 03:02:11.984681 1119007 kubeadm.go:310]
I0127 03:02:11.984836 1119007 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 03:02:11.984859 1119007 kubeadm.go:310]
I0127 03:02:11.984989 1119007 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 03:02:11.985010 1119007 kubeadm.go:310]
I0127 03:02:11.985048 1119007 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 03:02:11.985139 1119007 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 03:02:11.985214 1119007 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 03:02:11.985223 1119007 kubeadm.go:310]
I0127 03:02:11.985308 1119007 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 03:02:11.985320 1119007 kubeadm.go:310]
I0127 03:02:11.985386 1119007 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 03:02:11.985394 1119007 kubeadm.go:310]
I0127 03:02:11.985466 1119007 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 03:02:11.985573 1119007 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 03:02:11.985666 1119007 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 03:02:11.985676 1119007 kubeadm.go:310]
I0127 03:02:11.985787 1119007 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 03:02:11.985893 1119007 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 03:02:11.985903 1119007 kubeadm.go:310]
I0127 03:02:11.986015 1119007 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
I0127 03:02:11.986154 1119007 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
I0127 03:02:11.986187 1119007 kubeadm.go:310] --control-plane
I0127 03:02:11.986194 1119007 kubeadm.go:310]
I0127 03:02:11.986302 1119007 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 03:02:11.986313 1119007 kubeadm.go:310]
I0127 03:02:11.986421 1119007 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
I0127 03:02:11.986559 1119007 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba
I0127 03:02:11.988046 1119007 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
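[Editor's note: the --discovery-token-ca-cert-hash in the join commands above lets joining nodes pin the cluster CA; it is the SHA-256 of the CA certificate's Subject Public Key Info. The sketch below recomputes it from the CA file. This cluster keeps its certs under /var/lib/minikube/certs per the [certs] lines above; stock kubeadm would use /etc/kubernetes/pki/ca.crt.]

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The discovery hash is SHA-256 over the CA's Subject Public Key Info.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }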
I0127 03:02:11.988085 1119007 cni.go:84] Creating CNI manager for ""
I0127 03:02:11.988096 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 03:02:11.989984 1119007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 03:02:11.991565 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 03:02:12.008152 1119007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
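[Editor's note: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI chain announced by "Configuring bridge CNI". The log does not show its contents; the sketch below writes a representative bridge+portmap conflist, with the plugin fields and subnet chosen for illustration rather than copied from minikube.]

    package main

    import "os"

    func main() {
        // Representative bridge CNI chain; field values are illustrative,
        // not the exact bytes minikube generates.
        conflist := `{
          "cniVersion": "0.3.1",
          "name": "k8s",
          "plugins": [
            {
              "type": "bridge",
              "bridge": "bridge",
              "isDefaultGateway": true,
              "ipMasq": true,
              "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
            },
            { "type": "portmap", "capabilities": { "portMappings": true } }
          ]
        }`
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }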
I0127 03:02:12.031285 1119007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 03:02:12.031368 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:12.031415 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-887091 minikube.k8s.io/updated_at=2025_01_27T03_02_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=no-preload-887091 minikube.k8s.io/primary=true
I0127 03:02:12.301916 1119007 ops.go:34] apiserver oom_adj: -16
I0127 03:02:12.302079 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:12.802985 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:13.302566 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:13.802370 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:14.302582 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:14.802350 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:15.302355 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:15.802132 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:15.926758 1119007 kubeadm.go:1113] duration metric: took 3.895467932s to wait for elevateKubeSystemPrivileges
I0127 03:02:15.926808 1119007 kubeadm.go:394] duration metric: took 4m35.245756492s to StartCluster
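[Editor's note: the repeated `kubectl get sa default` runs between 03:02:12 and 03:02:15 poll for the "default" ServiceAccount; its appearance signals that the controller-manager's service-account controller is up, which elevateKubeSystemPrivileges waits on before creating the minikube-rbac cluster-admin binding. A client-go sketch of that wait, assuming a configured *kubernetes.Clientset:]

    package main

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA blocks until the "default" ServiceAccount exists,
    // mirroring the repeated `kubectl get sa default` polls in the log.
    func waitForDefaultSA(cs *kubernetes.Clientset) error {
        return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return false, nil // controller hasn't created it yet; keep polling
            }
            return err == nil, err
        })
    }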
I0127 03:02:15.926834 1119007 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:02:15.926944 1119007 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20316-1057178/kubeconfig
I0127 03:02:15.928428 1119007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:02:15.928677 1119007 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 03:02:15.928795 1119007 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 03:02:15.928913 1119007 config.go:182] Loaded profile config "no-preload-887091": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 03:02:15.928932 1119007 addons.go:69] Setting metrics-server=true in profile "no-preload-887091"
I0127 03:02:15.928966 1119007 addons.go:238] Setting addon metrics-server=true in "no-preload-887091"
I0127 03:02:15.928977 1119007 addons.go:69] Setting dashboard=true in profile "no-preload-887091"
W0127 03:02:15.928985 1119007 addons.go:247] addon metrics-server should already be in state true
I0127 03:02:15.928991 1119007 addons.go:238] Setting addon dashboard=true in "no-preload-887091"
I0127 03:02:15.928918 1119007 addons.go:69] Setting storage-provisioner=true in profile "no-preload-887091"
I0127 03:02:15.929020 1119007 host.go:66] Checking if "no-preload-887091" exists ...
I0127 03:02:15.929025 1119007 addons.go:238] Setting addon storage-provisioner=true in "no-preload-887091"
W0127 03:02:15.929036 1119007 addons.go:247] addon storage-provisioner should already be in state true
I0127 03:02:15.928961 1119007 addons.go:69] Setting default-storageclass=true in profile "no-preload-887091"
I0127 03:02:15.929073 1119007 host.go:66] Checking if "no-preload-887091" exists ...
I0127 03:02:15.929093 1119007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-887091"
W0127 03:02:15.928999 1119007 addons.go:247] addon dashboard should already be in state true
I0127 03:02:15.929175 1119007 host.go:66] Checking if "no-preload-887091" exists ...
I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.929544 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.929557 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.929547 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.929584 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.929499 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.929692 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.930306 1119007 out.go:177] * Verifying Kubernetes components...
I0127 03:02:15.931877 1119007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 03:02:15.952533 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
I0127 03:02:15.952549 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
I0127 03:02:15.952581 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
I0127 03:02:15.952721 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
I0127 03:02:15.954529 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.954547 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.954808 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.955205 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.955229 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.955233 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.955253 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.955313 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.955413 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.955437 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.955766 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.955849 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.955886 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.955947 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.956424 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.956463 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.956469 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.956507 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.956724 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.956927 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
I0127 03:02:15.957100 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.957708 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.957746 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.960884 1119007 addons.go:238] Setting addon default-storageclass=true in "no-preload-887091"
W0127 03:02:15.960910 1119007 addons.go:247] addon default-storageclass should already be in state true
I0127 03:02:15.960960 1119007 host.go:66] Checking if "no-preload-887091" exists ...
I0127 03:02:15.961323 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.961366 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.977560 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
I0127 03:02:15.978028 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.978173 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
I0127 03:02:15.978517 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
I0127 03:02:15.978693 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.978872 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.978901 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.979226 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.979298 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.979562 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.979576 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
I0127 03:02:15.979593 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.979923 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.980113 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
I0127 03:02:15.980289 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.980304 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.980894 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.981251 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
I0127 03:02:15.981811 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 03:02:15.982385 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 03:02:15.983016 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 03:02:15.983162 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
I0127 03:02:15.983756 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.983837 1119007 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 03:02:15.984185 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.984202 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.984606 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.985117 1119007 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 03:02:15.985204 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.985237 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.985253 1119007 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:02:15.985273 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 03:02:15.985297 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 03:02:15.985367 1119007 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 03:02:15.986458 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 03:02:15.986480 1119007 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 03:02:15.986546 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 03:02:15.987599 1119007 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 03:02:15.988812 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.988933 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 03:02:15.989273 1119007 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 03:02:15.989471 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 03:02:15.989502 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.989571 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 03:02:15.989716 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 03:02:15.989884 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 03:02:15.990033 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 03:02:15.990172 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
I0127 03:02:15.990858 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.991445 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 03:02:15.991468 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.991628 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 03:02:15.991828 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 03:02:15.992248 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 03:02:15.992428 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
I0127 03:02:15.993703 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.994218 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 03:02:15.994244 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.994557 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 03:02:15.994742 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 03:02:15.994902 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 03:02:15.995042 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
I0127 03:02:16.004890 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
I0127 03:02:16.005324 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:16.005841 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:16.005861 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:16.006249 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:16.006454 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
I0127 03:02:16.008475 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 03:02:16.008706 1119007 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 03:02:16.008719 1119007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 03:02:16.008733 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 03:02:16.011722 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:16.012561 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 03:02:16.012637 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 03:02:16.012663 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:16.012777 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 03:02:16.012973 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 03:02:16.013155 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
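[Editor's note: each sshutil.go line above opens an SSH client to the VM using the address from its DHCP lease (192.168.61.201:22), user "docker", and the per-machine id_rsa key. A minimal equivalent with golang.org/x/crypto/ssh, using the paths from the log; host-key verification is skipped here because these are disposable test VMs.]

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Throwaway test VM; real deployments should verify host keys.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "192.168.61.201:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        log.Println("connected")
    }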
I0127 03:02:16.171165 1119007 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 03:02:16.193562 1119007 node_ready.go:35] waiting up to 6m0s for node "no-preload-887091" to be "Ready" ...
I0127 03:02:16.246946 1119007 node_ready.go:49] node "no-preload-887091" has status "Ready":"True"
I0127 03:02:16.246978 1119007 node_ready.go:38] duration metric: took 53.383421ms for node "no-preload-887091" to be "Ready" ...
I0127 03:02:16.246992 1119007 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 03:02:16.274293 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
I0127 03:02:16.274621 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 03:02:16.274647 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 03:02:16.305232 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 03:02:16.327479 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:02:16.328118 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 03:02:16.328136 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 03:02:16.428329 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 03:02:16.428364 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 03:02:16.466201 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 03:02:16.466236 1119007 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 03:02:16.599271 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 03:02:16.599315 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 03:02:16.638608 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 03:02:16.638637 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 03:02:16.828108 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 03:02:16.828150 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 03:02:16.838645 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 03:02:16.838676 1119007 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 03:02:16.984773 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:16.984808 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:16.985269 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:16.985286 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:16.985295 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:16.985302 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:16.985629 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:16.985649 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:17.004424 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:17.004447 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:17.004789 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
I0127 03:02:17.004799 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:17.004830 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:17.011294 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 03:02:17.011605 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 03:02:17.011624 1119007 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 03:02:17.109457 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 03:02:17.109494 1119007 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 03:02:17.218037 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 03:02:17.218071 1119007 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 03:02:17.272264 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 03:02:17.272299 1119007 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 03:02:17.346698 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 03:02:17.903867 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.57633993s)
I0127 03:02:17.903940 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:17.903958 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:17.904299 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
I0127 03:02:17.904382 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:17.904399 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:17.904412 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:17.904418 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:17.904680 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:17.904702 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:17.904715 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
I0127 03:02:18.291876 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.280535535s)
I0127 03:02:18.291939 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:18.291962 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:18.292296 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:18.292315 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:18.292323 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:18.292329 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:18.293045 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
I0127 03:02:18.293120 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:18.293147 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:18.293165 1119007 addons.go:479] Verifying addon metrics-server=true in "no-preload-887091"
I0127 03:02:18.308148 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:19.202588 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.855830221s)
I0127 03:02:19.202668 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:19.202685 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:19.202996 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:19.203014 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:19.203031 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:19.203046 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:19.203365 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:19.203408 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:19.205207 1119007 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-887091 addons enable metrics-server
I0127 03:02:19.206884 1119007 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0127 03:02:19.208319 1119007 addons.go:514] duration metric: took 3.279531879s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
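The four addons enabled above can also be inspected by hand once the profile's kubeconfig context is active; a minimal sketch, assuming the kubectl context name matches the profile (minikube's default) and the standard addon namespaces:

  # metrics-server and storage-provisioner land in kube-system;
  # the dashboard addon deploys into its own kubernetes-dashboard namespace
  kubectl --context no-preload-887091 -n kube-system get deploy metrics-server
  kubectl --context no-preload-887091 -n kube-system get pod storage-provisioner
  kubectl --context no-preload-887091 -n kubernetes-dashboard get pods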
I0127 03:02:20.784826 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:23.282142 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:25.286311 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.286348 1119007 pod_ready.go:82] duration metric: took 9.012019717s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.286363 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.296155 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.296266 1119007 pod_ready.go:82] duration metric: took 9.891475ms for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.296304 1119007 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.306424 1119007 pod_ready.go:93] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.306520 1119007 pod_ready.go:82] duration metric: took 10.178061ms for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.306550 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.316320 1119007 pod_ready.go:93] pod "kube-apiserver-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.316353 1119007 pod_ready.go:82] duration metric: took 9.779811ms for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.316368 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.324972 1119007 pod_ready.go:93] pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.324998 1119007 pod_ready.go:82] duration metric: took 8.620263ms for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.325011 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.682761 1119007 pod_ready.go:93] pod "kube-proxy-45pz6" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.682792 1119007 pod_ready.go:82] duration metric: took 357.773408ms for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.682807 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:26.086323 1119007 pod_ready.go:93] pod "kube-scheduler-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:26.086365 1119007 pod_ready.go:82] duration metric: took 403.548355ms for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:26.086378 1119007 pod_ready.go:39] duration metric: took 9.839373235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
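The extra-wait phase above polls each system pod's Ready condition in turn. Roughly the same check can be reproduced with kubectl wait; a sketch assuming the same label selectors the log lists:

  # block up to 6m per selector, mirroring the per-pod timeout used above
  kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
  kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=6m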
I0127 03:02:26.086398 1119007 api_server.go:52] waiting for apiserver process to appear ...
I0127 03:02:26.086493 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 03:02:26.115441 1119007 api_server.go:72] duration metric: took 10.186729821s to wait for apiserver process to appear ...
I0127 03:02:26.115474 1119007 api_server.go:88] waiting for apiserver healthz status ...
I0127 03:02:26.115503 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
I0127 03:02:26.125822 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
ok
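That healthz probe is a plain HTTPS GET, so the same check can be made from the host; a sketch assuming /healthz remains readable without credentials (Kubernetes binds the system:public-info-viewer role to it by default):

  # -k skips verification of the cluster's self-signed CA; a healthy apiserver answers with the body "ok"
  curl -sk https://192.168.61.201:8443/healthz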
I0127 03:02:26.127247 1119007 api_server.go:141] control plane version: v1.32.1
I0127 03:02:26.127277 1119007 api_server.go:131] duration metric: took 11.792506ms to wait for apiserver health ...
I0127 03:02:26.127289 1119007 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 03:02:26.285021 1119007 system_pods.go:59] 9 kube-system pods found
I0127 03:02:26.285059 1119007 system_pods.go:61] "coredns-668d6bf9bc-86j6q" [9b85ae79-ae19-4cd1-a0da-0343c9e2801c] Running
I0127 03:02:26.285067 1119007 system_pods.go:61] "coredns-668d6bf9bc-fk8cw" [c7075b92-233d-4a5a-b864-ef349d7125e7] Running
I0127 03:02:26.285073 1119007 system_pods.go:61] "etcd-no-preload-887091" [45d4a5fc-797f-4d4a-9204-049ebcdc5647] Running
I0127 03:02:26.285079 1119007 system_pods.go:61] "kube-apiserver-no-preload-887091" [11e7ea14-678a-408f-a722-8fedb984c086] Running
I0127 03:02:26.285085 1119007 system_pods.go:61] "kube-controller-manager-no-preload-887091" [95d63381-33aa-428b-80b1-6e8ccf96b8a1] Running
I0127 03:02:26.285089 1119007 system_pods.go:61] "kube-proxy-45pz6" [b3aa986f-d6d8-4050-8760-438aabd39bdc] Running
I0127 03:02:26.285094 1119007 system_pods.go:61] "kube-scheduler-no-preload-887091" [5065d24f-256d-43ad-bd00-1d5868b7214d] Running
I0127 03:02:26.285104 1119007 system_pods.go:61] "metrics-server-f79f97bbb-vshg4" [33ae36ed-d8a4-4d60-bcd0-1becf2d490bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 03:02:26.285110 1119007 system_pods.go:61] "storage-provisioner" [127a1f13-b70c-4482-bd8b-14a6bf24b663] Running
I0127 03:02:26.285121 1119007 system_pods.go:74] duration metric: took 157.824017ms to wait for pod list to return data ...
I0127 03:02:26.285134 1119007 default_sa.go:34] waiting for default service account to be created ...
I0127 03:02:26.480092 1119007 default_sa.go:45] found service account: "default"
I0127 03:02:26.480128 1119007 default_sa.go:55] duration metric: took 194.984911ms for default service account to be created ...
I0127 03:02:26.480141 1119007 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 03:02:26.688727 1119007 system_pods.go:87] 9 kube-system pods found
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-887091 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-887091 -n no-preload-887091
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p no-preload-887091 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-887091 logs -n 25: (1.478230465s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| addons | enable metrics-server -p old-k8s-version-760492 | old-k8s-version-760492 | jenkins | v1.35.0 | 27 Jan 25 02:56 UTC | 27 Jan 25 02:56 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-760492 | old-k8s-version-760492 | jenkins | v1.35.0 | 27 Jan 25 02:56 UTC | 27 Jan 25 02:57 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-887091 | no-preload-887091 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-887091 | no-preload-887091 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable dashboard -p embed-certs-264552 | embed-certs-264552 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| addons | enable dashboard -p default-k8s-diff-port-717075 | default-k8s-diff-port-717075 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p embed-certs-264552 | embed-certs-264552 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| start | -p | default-k8s-diff-port-717075 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | |
| | default-k8s-diff-port-717075 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable dashboard -p old-k8s-version-760492 | old-k8s-version-760492 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 02:57 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-760492 | old-k8s-version-760492 | jenkins | v1.35.0 | 27 Jan 25 02:57 UTC | 27 Jan 25 03:00 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| image | old-k8s-version-760492 image | old-k8s-version-760492 | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
| | list --format=json | | | | | |
| pause | -p old-k8s-version-760492 | old-k8s-version-760492 | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p old-k8s-version-760492 | old-k8s-version-760492 | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p old-k8s-version-760492 | old-k8s-version-760492 | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
| delete | -p old-k8s-version-760492 | old-k8s-version-760492 | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
| start | -p newest-cni-642127 --memory=2200 --alsologtostderr | newest-cni-642127 | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable metrics-server -p newest-cni-642127 | newest-cni-642127 | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p newest-cni-642127 | newest-cni-642127 | jenkins | v1.35.0 | 27 Jan 25 03:01 UTC | 27 Jan 25 03:01 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p newest-cni-642127 | newest-cni-642127 | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p newest-cni-642127 --memory=2200 --alsologtostderr | newest-cni-642127 | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
| | --wait=apiserver,system_pods,default_sa | | | | | |
| | --network-plugin=cni | | | | | |
| | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 | | | | | |
| | --driver=kvm2 --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| image | newest-cni-642127 image list | newest-cni-642127 | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
| | --format=json | | | | | |
| pause | -p newest-cni-642127 | newest-cni-642127 | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p newest-cni-642127 | newest-cni-642127 | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p newest-cni-642127 | newest-cni-642127 | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
| delete | -p newest-cni-642127 | newest-cni-642127 | jenkins | v1.35.0 | 27 Jan 25 03:02 UTC | 27 Jan 25 03:02 UTC |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/27 03:02:00
Running on machine: ubuntu-20-agent-14
Binary: Built with gc go1.23.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0127 03:02:00.237835 1121411 out.go:345] Setting OutFile to fd 1 ...
I0127 03:02:00.238128 1121411 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 03:02:00.238140 1121411 out.go:358] Setting ErrFile to fd 2...
I0127 03:02:00.238146 1121411 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 03:02:00.238345 1121411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-1057178/.minikube/bin
I0127 03:02:00.239045 1121411 out.go:352] Setting JSON to false
I0127 03:02:00.240327 1121411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13467,"bootTime":1737933453,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0127 03:02:00.240474 1121411 start.go:139] virtualization: kvm guest
I0127 03:02:00.242533 1121411 out.go:177] * [newest-cni-642127] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0127 03:02:00.244184 1121411 out.go:177] - MINIKUBE_LOCATION=20316
I0127 03:02:00.244247 1121411 notify.go:220] Checking for updates...
I0127 03:02:00.246478 1121411 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0127 03:02:00.247855 1121411 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20316-1057178/kubeconfig
I0127 03:02:00.249125 1121411 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-1057178/.minikube
I0127 03:02:00.250346 1121411 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0127 03:02:00.251585 1121411 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0127 03:02:00.253406 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 03:02:00.254032 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:00.254107 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:00.270414 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33505
I0127 03:02:00.270862 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:00.271405 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:00.271428 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:00.271776 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:00.271945 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:00.272173 1121411 driver.go:394] Setting default libvirt URI to qemu:///system
I0127 03:02:00.272461 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:00.272496 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:00.287317 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36083
I0127 03:02:00.287836 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:00.288298 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:00.288340 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:00.288708 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:00.288885 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:00.325767 1121411 out.go:177] * Using the kvm2 driver based on existing profile
I0127 03:02:00.327047 1121411 start.go:297] selected driver: kvm2
I0127 03:02:00.327060 1121411 start.go:901] validating driver "kvm2" against &{Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 03:02:00.327183 1121411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0127 03:02:00.327982 1121411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 03:02:00.328064 1121411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20316-1057178/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 03:02:00.343178 1121411 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0127 03:02:00.343639 1121411 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0127 03:02:00.343677 1121411 cni.go:84] Creating CNI manager for ""
I0127 03:02:00.343730 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 03:02:00.343763 1121411 start.go:340] cluster config:
{Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 03:02:00.343883 1121411 iso.go:125] acquiring lock: {Name:mkd30bc9d11f9170e89ad95ce7ba25fa6d1e04f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 03:02:00.345590 1121411 out.go:177] * Starting "newest-cni-642127" primary control-plane node in "newest-cni-642127" cluster
I0127 03:02:00.346774 1121411 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 03:02:00.346814 1121411 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
I0127 03:02:00.346828 1121411 cache.go:56] Caching tarball of preloaded images
I0127 03:02:00.346908 1121411 preload.go:172] Found /home/jenkins/minikube-integration/20316-1057178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0127 03:02:00.346919 1121411 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
I0127 03:02:00.347008 1121411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/config.json ...
I0127 03:02:00.347215 1121411 start.go:360] acquireMachinesLock for newest-cni-642127: {Name:mka8dc154c517d64837d06e2f84f8bddd0b82c58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0127 03:02:00.347258 1121411 start.go:364] duration metric: took 23.213µs to acquireMachinesLock for "newest-cni-642127"
I0127 03:02:00.347273 1121411 start.go:96] Skipping create...Using existing machine configuration
I0127 03:02:00.347278 1121411 fix.go:54] fixHost starting:
I0127 03:02:00.347525 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:00.347569 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:00.362339 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
I0127 03:02:00.362837 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:00.363413 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:00.363435 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:00.363738 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:00.363908 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:00.364065 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
I0127 03:02:00.365643 1121411 fix.go:112] recreateIfNeeded on newest-cni-642127: state=Stopped err=<nil>
I0127 03:02:00.365669 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
W0127 03:02:00.366076 1121411 fix.go:138] unexpected machine state, will restart: <nil>
I0127 03:02:00.368560 1121411 out.go:177] * Restarting existing kvm2 VM for "newest-cni-642127" ...
I0127 03:01:59.553947 1119007 pod_ready.go:103] pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:01.029438 1119007 pod_ready.go:82] duration metric: took 4m0.000430308s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" ...
E0127 03:02:01.029463 1119007 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-z5lnh" in "kube-system" namespace to be "Ready" (will not retry!)
I0127 03:02:01.029492 1119007 pod_ready.go:39] duration metric: took 4m12.545085543s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 03:02:01.029521 1119007 kubeadm.go:597] duration metric: took 4m20.2724454s to restartPrimaryControlPlane
W0127 03:02:01.029578 1119007 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
I0127 03:02:01.029603 1119007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 03:02:03.004910 1119007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.9752757s)
I0127 03:02:03.005026 1119007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 03:02:03.022327 1119007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 03:02:03.033433 1119007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 03:02:03.043716 1119007 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 03:02:03.043751 1119007 kubeadm.go:157] found existing configuration files:
I0127 03:02:03.043807 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 03:02:03.053848 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 03:02:03.053913 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 03:02:03.064618 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 03:02:03.075259 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 03:02:03.075327 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 03:02:03.087088 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 03:02:03.098909 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 03:02:03.098975 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 03:02:03.110053 1119007 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 03:02:03.119864 1119007 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 03:02:03.119938 1119007 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
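The four grep-then-rm pairs above implement minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. Condensed into an equivalent loop (a sketch of the same logic, not code from the repo):

  for f in admin kubelet controller-manager scheduler; do
    # keep the file only if it points at the expected endpoint; otherwise remove it
    sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f.conf \
      || sudo rm -f /etc/kubernetes/$f.conf
  done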
I0127 03:02:03.130987 1119007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 03:02:03.185348 1119007 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 03:02:03.185417 1119007 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 03:02:03.314698 1119007 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 03:02:03.314881 1119007 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 03:02:03.315043 1119007 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 03:02:03.324401 1119007 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 03:02:03.326164 1119007 out.go:235] - Generating certificates and keys ...
I0127 03:02:03.326268 1119007 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 03:02:03.326359 1119007 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 03:02:03.326477 1119007 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 03:02:03.326572 1119007 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 03:02:03.326663 1119007 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 03:02:03.326738 1119007 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 03:02:03.326859 1119007 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 03:02:03.327073 1119007 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 03:02:03.327208 1119007 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 03:02:03.327338 1119007 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 03:02:03.327408 1119007 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 03:02:03.327502 1119007 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 03:02:03.521123 1119007 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 03:02:03.756848 1119007 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 03:02:03.911089 1119007 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 03:02:04.122010 1119007 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 03:02:04.383085 1119007 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 03:02:04.383614 1119007 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 03:02:04.386205 1119007 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 03:02:00.791431 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:02.793532 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:00.101750 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:02.600452 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:00.369945 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Start
I0127 03:02:00.370121 1121411 main.go:141] libmachine: (newest-cni-642127) starting domain...
I0127 03:02:00.370143 1121411 main.go:141] libmachine: (newest-cni-642127) ensuring networks are active...
I0127 03:02:00.370872 1121411 main.go:141] libmachine: (newest-cni-642127) Ensuring network default is active
I0127 03:02:00.371180 1121411 main.go:141] libmachine: (newest-cni-642127) Ensuring network mk-newest-cni-642127 is active
I0127 03:02:00.371540 1121411 main.go:141] libmachine: (newest-cni-642127) getting domain XML...
I0127 03:02:00.372193 1121411 main.go:141] libmachine: (newest-cni-642127) creating domain...
I0127 03:02:01.655632 1121411 main.go:141] libmachine: (newest-cni-642127) waiting for IP...
I0127 03:02:01.656638 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:01.657157 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:01.657251 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:01.657139 1121446 retry.go:31] will retry after 277.784658ms: waiting for domain to come up
I0127 03:02:01.936660 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:01.937240 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:01.937271 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:01.937207 1121446 retry.go:31] will retry after 238.163617ms: waiting for domain to come up
I0127 03:02:02.176792 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:02.177474 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:02.177544 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:02.177436 1121446 retry.go:31] will retry after 380.939356ms: waiting for domain to come up
I0127 03:02:02.560097 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:02.560666 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:02.560700 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:02.560618 1121446 retry.go:31] will retry after 505.552982ms: waiting for domain to come up
I0127 03:02:03.067443 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:03.067968 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:03.068040 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:03.067965 1121446 retry.go:31] will retry after 727.427105ms: waiting for domain to come up
I0127 03:02:03.797031 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:03.797596 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:03.797621 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:03.797562 1121446 retry.go:31] will retry after 647.611718ms: waiting for domain to come up
I0127 03:02:04.447043 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:04.447523 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:04.447556 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:04.447508 1121446 retry.go:31] will retry after 984.747883ms: waiting for domain to come up
I0127 03:02:04.388044 1119007 out.go:235] - Booting up control plane ...
I0127 03:02:04.388157 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 03:02:04.388265 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 03:02:04.388373 1119007 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 03:02:04.409379 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 03:02:04.416389 1119007 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 03:02:04.416479 1119007 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 03:02:04.571487 1119007 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 03:02:04.571690 1119007 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 03:02:05.072916 1119007 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.574288ms
I0127 03:02:05.073090 1119007 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
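Both health gates above are ordinary HTTP endpoints inside the guest; if a start hangs at this stage they can be probed manually, a sketch assuming shell access via minikube ssh:

  # the kubelet serves healthz on localhost:10248 inside the VM
  minikube -p no-preload-887091 ssh -- curl -s http://127.0.0.1:10248/healthz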
I0127 03:02:05.292102 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:07.292399 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:09.792796 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:05.099225 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:07.099594 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:09.600572 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:05.434383 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:05.434961 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:05.434994 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:05.434926 1121446 retry.go:31] will retry after 1.239188819s: waiting for domain to come up
I0127 03:02:06.675638 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:06.676209 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:06.676244 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:06.676172 1121446 retry.go:31] will retry after 1.489275436s: waiting for domain to come up
I0127 03:02:08.167884 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:08.168365 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:08.168402 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:08.168327 1121446 retry.go:31] will retry after 1.739982698s: waiting for domain to come up
I0127 03:02:09.910362 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:09.910871 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:09.910964 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:09.910871 1121446 retry.go:31] will retry after 2.79669233s: waiting for domain to come up
I0127 03:02:10.574512 1119007 kubeadm.go:310] [api-check] The API server is healthy after 5.501444049s
I0127 03:02:10.590265 1119007 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 03:02:10.612200 1119007 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 03:02:10.650305 1119007 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 03:02:10.650585 1119007 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-887091 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 03:02:10.661688 1119007 kubeadm.go:310] [bootstrap-token] Using token: 25alvo.7xrmg7nh4q5v903n
I0127 03:02:10.663119 1119007 out.go:235] - Configuring RBAC rules ...
I0127 03:02:10.663280 1119007 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 03:02:10.671888 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 03:02:10.685310 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 03:02:10.690214 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 03:02:10.694363 1119007 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 03:02:10.698959 1119007 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 03:02:10.982964 1119007 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 03:02:11.430752 1119007 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 03:02:11.982446 1119007 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 03:02:11.984681 1119007 kubeadm.go:310]
I0127 03:02:11.984836 1119007 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 03:02:11.984859 1119007 kubeadm.go:310]
I0127 03:02:11.984989 1119007 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 03:02:11.985010 1119007 kubeadm.go:310]
I0127 03:02:11.985048 1119007 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 03:02:11.985139 1119007 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 03:02:11.985214 1119007 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 03:02:11.985223 1119007 kubeadm.go:310]
I0127 03:02:11.985308 1119007 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 03:02:11.985320 1119007 kubeadm.go:310]
I0127 03:02:11.985386 1119007 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 03:02:11.985394 1119007 kubeadm.go:310]
I0127 03:02:11.985466 1119007 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 03:02:11.985573 1119007 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 03:02:11.985666 1119007 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 03:02:11.985676 1119007 kubeadm.go:310]
I0127 03:02:11.985787 1119007 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 03:02:11.985893 1119007 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 03:02:11.985903 1119007 kubeadm.go:310]
I0127 03:02:11.986015 1119007 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
I0127 03:02:11.986154 1119007 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
I0127 03:02:11.986187 1119007 kubeadm.go:310] --control-plane
I0127 03:02:11.986194 1119007 kubeadm.go:310]
I0127 03:02:11.986302 1119007 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 03:02:11.986313 1119007 kubeadm.go:310]
I0127 03:02:11.986421 1119007 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 25alvo.7xrmg7nh4q5v903n \
I0127 03:02:11.986559 1119007 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba
I0127 03:02:11.988046 1119007 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
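The bootstrap token printed in the join commands above expires (24h by default), so if a join command is needed later it is better reissued than copied from old logs; the standard kubeadm command for that:

  # run on the control-plane node; prints a fresh, complete join command
  sudo kubeadm token create --print-join-command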
I0127 03:02:11.988085 1119007 cni.go:84] Creating CNI manager for ""
I0127 03:02:11.988096 1119007 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 03:02:11.989984 1119007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 03:02:11.991565 1119007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 03:02:12.008152 1119007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
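The 496-byte payload copied above is minikube's bridge CNI config. For orientation, a representative conflist of the same shape (hypothetical contents; the exact file minikube writes may differ in fields and pod subnet):

  # hypothetical stand-in for the 1-k8s.conflist minikube generates
  sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF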
I0127 03:02:12.031285 1119007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 03:02:12.031368 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:12.031415 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-887091 minikube.k8s.io/updated_at=2025_01_27T03_02_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=no-preload-887091 minikube.k8s.io/primary=true
I0127 03:02:12.301916 1119007 ops.go:34] apiserver oom_adj: -16
I0127 03:02:12.302079 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:12.802985 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:11.795142 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:14.292215 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:11.613207 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:14.098783 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:12.710060 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:12.710698 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:12.710737 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:12.710630 1121446 retry.go:31] will retry after 2.899766509s: waiting for domain to come up
I0127 03:02:13.302566 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:13.802370 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:14.302582 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:14.802350 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:15.302355 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:15.802132 1119007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:15.926758 1119007 kubeadm.go:1113] duration metric: took 3.895467932s to wait for elevateKubeSystemPrivileges
I0127 03:02:15.926808 1119007 kubeadm.go:394] duration metric: took 4m35.245756492s to StartCluster
I0127 03:02:15.926834 1119007 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:02:15.926944 1119007 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20316-1057178/kubeconfig
I0127 03:02:15.928428 1119007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:02:15.928677 1119007 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 03:02:15.928795 1119007 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 03:02:15.928913 1119007 config.go:182] Loaded profile config "no-preload-887091": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 03:02:15.928932 1119007 addons.go:69] Setting metrics-server=true in profile "no-preload-887091"
I0127 03:02:15.928966 1119007 addons.go:238] Setting addon metrics-server=true in "no-preload-887091"
I0127 03:02:15.928977 1119007 addons.go:69] Setting dashboard=true in profile "no-preload-887091"
W0127 03:02:15.928985 1119007 addons.go:247] addon metrics-server should already be in state true
I0127 03:02:15.928991 1119007 addons.go:238] Setting addon dashboard=true in "no-preload-887091"
I0127 03:02:15.928918 1119007 addons.go:69] Setting storage-provisioner=true in profile "no-preload-887091"
I0127 03:02:15.929020 1119007 host.go:66] Checking if "no-preload-887091" exists ...
I0127 03:02:15.929025 1119007 addons.go:238] Setting addon storage-provisioner=true in "no-preload-887091"
W0127 03:02:15.929036 1119007 addons.go:247] addon storage-provisioner should already be in state true
I0127 03:02:15.928961 1119007 addons.go:69] Setting default-storageclass=true in profile "no-preload-887091"
I0127 03:02:15.929073 1119007 host.go:66] Checking if "no-preload-887091" exists ...
I0127 03:02:15.929093 1119007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-887091"
W0127 03:02:15.928999 1119007 addons.go:247] addon dashboard should already be in state true
I0127 03:02:15.929175 1119007 host.go:66] Checking if "no-preload-887091" exists ...
I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.929496 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.929544 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.929557 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.929547 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.929584 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.929499 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.929692 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.930306 1119007 out.go:177] * Verifying Kubernetes components...
I0127 03:02:15.931877 1119007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 03:02:15.952533 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
I0127 03:02:15.952549 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
I0127 03:02:15.952581 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
I0127 03:02:15.952721 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
I0127 03:02:15.954529 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.954547 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.954808 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.955205 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.955229 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.955233 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.955253 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.955313 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.955413 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.955437 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.955766 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.955849 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.955886 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.955947 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.956424 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.956463 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.956469 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.956507 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.956724 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.956927 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
I0127 03:02:15.957100 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.957708 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.957746 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.960884 1119007 addons.go:238] Setting addon default-storageclass=true in "no-preload-887091"
W0127 03:02:15.960910 1119007 addons.go:247] addon default-storageclass should already be in state true
I0127 03:02:15.960960 1119007 host.go:66] Checking if "no-preload-887091" exists ...
I0127 03:02:15.961323 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.961366 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.977560 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43979
I0127 03:02:15.978028 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.978173 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42633
I0127 03:02:15.978517 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
I0127 03:02:15.978693 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.978872 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.978901 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.979226 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.979298 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.979562 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.979576 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
I0127 03:02:15.979593 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.979923 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.980113 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
I0127 03:02:15.980289 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.980304 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.980894 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.981251 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
I0127 03:02:15.981811 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 03:02:15.982385 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 03:02:15.983016 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 03:02:15.983162 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
I0127 03:02:15.983756 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:15.983837 1119007 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 03:02:15.984185 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:15.984202 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:15.984606 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:15.985117 1119007 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 03:02:15.985204 1119007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:15.985237 1119007 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:15.985253 1119007 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:02:15.985273 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 03:02:15.985297 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 03:02:15.985367 1119007 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 03:02:15.986458 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 03:02:15.986480 1119007 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 03:02:15.986546 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 03:02:15.987599 1119007 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 03:02:15.988812 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.988933 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 03:02:15.989273 1119007 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 03:02:15.989471 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 03:02:15.989502 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.989571 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 03:02:15.989716 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 03:02:15.989884 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 03:02:15.990033 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 03:02:15.990172 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
I0127 03:02:15.990858 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.991445 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 03:02:15.991468 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.991628 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 03:02:15.991828 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 03:02:15.992248 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 03:02:15.992428 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
I0127 03:02:15.993703 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.994218 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 03:02:15.994244 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:15.994557 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 03:02:15.994742 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 03:02:15.994902 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 03:02:15.995042 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
I0127 03:02:16.004890 1119007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
I0127 03:02:16.005324 1119007 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:16.005841 1119007 main.go:141] libmachine: Using API Version 1
I0127 03:02:16.005861 1119007 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:16.006249 1119007 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:16.006454 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetState
I0127 03:02:16.008475 1119007 main.go:141] libmachine: (no-preload-887091) Calling .DriverName
I0127 03:02:16.008706 1119007 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 03:02:16.008719 1119007 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 03:02:16.008733 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHHostname
I0127 03:02:16.011722 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:16.012561 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHPort
I0127 03:02:16.012637 1119007 main.go:141] libmachine: (no-preload-887091) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:f8:ff", ip: ""} in network mk-no-preload-887091: {Iface:virbr3 ExpiryTime:2025-01-27 03:54:01 +0000 UTC Type:0 Mac:52:54:00:32:f8:ff Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:no-preload-887091 Clientid:01:52:54:00:32:f8:ff}
I0127 03:02:16.012663 1119007 main.go:141] libmachine: (no-preload-887091) DBG | domain no-preload-887091 has defined IP address 192.168.61.201 and MAC address 52:54:00:32:f8:ff in network mk-no-preload-887091
I0127 03:02:16.012777 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHKeyPath
I0127 03:02:16.012973 1119007 main.go:141] libmachine: (no-preload-887091) Calling .GetSSHUsername
I0127 03:02:16.013155 1119007 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/no-preload-887091/id_rsa Username:docker}
I0127 03:02:16.171165 1119007 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 03:02:16.193562 1119007 node_ready.go:35] waiting up to 6m0s for node "no-preload-887091" to be "Ready" ...
I0127 03:02:16.246946 1119007 node_ready.go:49] node "no-preload-887091" has status "Ready":"True"
I0127 03:02:16.246978 1119007 node_ready.go:38] duration metric: took 53.383421ms for node "no-preload-887091" to be "Ready" ...
I0127 03:02:16.246992 1119007 pod_ready.go:36] extra wait of up to 6m0s for all system-critical pods matching labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 03:02:16.274293 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
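minikube's pod_ready wait is functionally close to `kubectl wait`; an illustrative equivalent for the coredns pod named above (not the code path minikube actually uses internally):

    kubectl --kubeconfig=/home/jenkins/minikube-integration/20316-1057178/kubeconfig \
      -n kube-system wait --for=condition=Ready \
      pod/coredns-668d6bf9bc-86j6q --timeout=6m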
I0127 03:02:16.274621 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 03:02:16.274647 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 03:02:16.305232 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 03:02:16.327479 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:02:16.328118 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 03:02:16.328136 1119007 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 03:02:16.428329 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 03:02:16.428364 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 03:02:16.466201 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 03:02:16.466236 1119007 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 03:02:16.599271 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 03:02:16.599315 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 03:02:16.638608 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 03:02:16.638637 1119007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 03:02:16.828108 1119007 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 03:02:16.828150 1119007 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 03:02:16.838645 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 03:02:16.838676 1119007 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 03:02:16.984773 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:16.984808 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:16.985269 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:16.985286 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:16.985295 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:16.985302 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:16.985629 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:16.985649 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:17.004424 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:17.004447 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:17.004789 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
I0127 03:02:17.004799 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:17.004830 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:17.011294 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 03:02:17.011605 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 03:02:17.011624 1119007 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 03:02:17.109457 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 03:02:17.109494 1119007 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 03:02:17.218037 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 03:02:17.218071 1119007 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 03:02:17.272264 1119007 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 03:02:17.272299 1119007 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 03:02:17.346698 1119007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 03:02:17.903867 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.57633993s)
I0127 03:02:17.903940 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:17.903958 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:17.904299 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
I0127 03:02:17.904382 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:17.904399 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:17.904412 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:17.904418 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:17.904680 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:17.904702 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:17.904715 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
I0127 03:02:18.291876 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.280535535s)
I0127 03:02:18.291939 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:18.291962 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:18.292296 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:18.292315 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:18.292323 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:18.292329 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:18.293045 1119007 main.go:141] libmachine: (no-preload-887091) DBG | Closing plugin on server side
I0127 03:02:18.293120 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:18.293147 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:18.293165 1119007 addons.go:479] Verifying addon metrics-server=true in "no-preload-887091"
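Once the metrics-server manifests above are applied, a hedged follow-up check is to watch the Deployment roll out; the deployment name here is inferred from the `metrics-server-f79f97bbb-*` pod names that appear later in the log:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.32.1/kubectl -n kube-system \
      rollout status deployment/metrics-server --timeout=120s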
I0127 03:02:18.308148 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:19.202588 1119007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.855830221s)
I0127 03:02:19.202668 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:19.202685 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:19.202996 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:19.203014 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:19.203031 1119007 main.go:141] libmachine: Making call to close driver server
I0127 03:02:19.203046 1119007 main.go:141] libmachine: (no-preload-887091) Calling .Close
I0127 03:02:19.203365 1119007 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:19.203408 1119007 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:19.205207 1119007 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-887091 addons enable metrics-server
I0127 03:02:19.206884 1119007 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0127 03:02:16.293451 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:18.793149 1119263 pod_ready.go:103] pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:19.785753 1119263 pod_ready.go:82] duration metric: took 4m0.001003583s for pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace to be "Ready" ...
E0127 03:02:19.785781 1119263 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-wkg98" in "kube-system" namespace to be "Ready" (will not retry!)
I0127 03:02:19.785801 1119263 pod_ready.go:39] duration metric: took 4m12.565302655s for the extra wait for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 03:02:19.785832 1119263 kubeadm.go:597] duration metric: took 4m20.078127881s to restartPrimaryControlPlane
W0127 03:02:19.785891 1119263 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
I0127 03:02:19.785918 1119263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 03:02:16.101837 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:18.600416 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:15.612007 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:15.612503 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | unable to find current IP address of domain newest-cni-642127 in network mk-newest-cni-642127
I0127 03:02:15.612532 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | I0127 03:02:15.612477 1121446 retry.go:31] will retry after 4.281984487s: waiting for domain to come up
I0127 03:02:19.898517 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:19.899156 1121411 main.go:141] libmachine: (newest-cni-642127) found domain IP: 192.168.50.51
I0127 03:02:19.899184 1121411 main.go:141] libmachine: (newest-cni-642127) reserving static IP address...
I0127 03:02:19.899199 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has current primary IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:19.899706 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "newest-cni-642127", mac: "52:54:00:b2:c0:f5", ip: "192.168.50.51"} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:19.899748 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | skip adding static IP to network mk-newest-cni-642127 - found existing host DHCP lease matching {name: "newest-cni-642127", mac: "52:54:00:b2:c0:f5", ip: "192.168.50.51"}
I0127 03:02:19.899765 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Getting to WaitForSSH function...
I0127 03:02:19.899786 1121411 main.go:141] libmachine: (newest-cni-642127) reserved static IP address 192.168.50.51 for domain newest-cni-642127
I0127 03:02:19.899794 1121411 main.go:141] libmachine: (newest-cni-642127) waiting for SSH...
I0127 03:02:19.902680 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:19.903077 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:19.903108 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:19.903425 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Using SSH client type: external
I0127 03:02:19.903455 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Using SSH private key: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa (-rw-------)
I0127 03:02:19.903497 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa -p 22] /usr/bin/ssh <nil>}
I0127 03:02:19.903528 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | About to run SSH command:
I0127 03:02:19.903545 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | exit 0
I0127 03:02:20.033236 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | SSH cmd err, output: <nil>:
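WaitForSSH above simply retries `exit 0` over the external SSH client until the guest answers. A sketch using the exact options, key path, and address from the logged command line (only the retry interval is an assumption):

    until ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
        -o UserKnownHostsFile=/dev/null -o PasswordAuthentication=no \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa \
        docker@192.168.50.51 'exit 0' 2>/dev/null; do
      sleep 2   # retry interval is an assumption, not taken from the log
    done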
I0127 03:02:20.033650 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetConfigRaw
I0127 03:02:20.034423 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
I0127 03:02:20.037477 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.038000 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:20.038034 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.038292 1121411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/config.json ...
I0127 03:02:20.038569 1121411 machine.go:93] provisionDockerMachine start ...
I0127 03:02:20.038593 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:20.038817 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:20.041604 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.042029 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:20.042058 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.042374 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:20.042730 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:20.042972 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:20.043158 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:20.043362 1121411 main.go:141] libmachine: Using SSH client type: native
I0127 03:02:20.043631 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.51 22 <nil> <nil>}
I0127 03:02:20.043646 1121411 main.go:141] libmachine: About to run SSH command:
hostname
I0127 03:02:20.162052 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0127 03:02:20.162088 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
I0127 03:02:20.162389 1121411 buildroot.go:166] provisioning hostname "newest-cni-642127"
I0127 03:02:20.162416 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
I0127 03:02:20.162603 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:20.166195 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.166703 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:20.166735 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.167015 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:20.167255 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:20.167440 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:20.167629 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:20.167847 1121411 main.go:141] libmachine: Using SSH client type: native
I0127 03:02:20.168082 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.51 22 <nil> <nil>}
I0127 03:02:20.168098 1121411 main.go:141] libmachine: About to run SSH command:
sudo hostname newest-cni-642127 && echo "newest-cni-642127" | sudo tee /etc/hostname
I0127 03:02:19.208319 1119007 addons.go:514] duration metric: took 3.279531879s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
I0127 03:02:20.784826 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:20.304578 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-642127
I0127 03:02:20.304614 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:20.307961 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.308494 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:20.308576 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.308725 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:20.308929 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:20.309194 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:20.309354 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:20.309604 1121411 main.go:141] libmachine: Using SSH client type: native
I0127 03:02:20.309846 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.51 22 <nil> <nil>}
I0127 03:02:20.309865 1121411 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\snewest-cni-642127' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-642127/g' /etc/hosts;
else
echo '127.0.1.1 newest-cni-642127' | sudo tee -a /etc/hosts;
fi
fi
I0127 03:02:20.431545 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0127 03:02:20.431586 1121411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20316-1057178/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-1057178/.minikube}
I0127 03:02:20.431617 1121411 buildroot.go:174] setting up certificates
I0127 03:02:20.431633 1121411 provision.go:84] configureAuth start
I0127 03:02:20.431649 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetMachineName
I0127 03:02:20.431999 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
I0127 03:02:20.435425 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.435885 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:20.435918 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.436172 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:20.439389 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.439969 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:20.440002 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.440288 1121411 provision.go:143] copyHostCerts
I0127 03:02:20.440368 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem, removing ...
I0127 03:02:20.440392 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem
I0127 03:02:20.440475 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.pem (1078 bytes)
I0127 03:02:20.440610 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem, removing ...
I0127 03:02:20.440672 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem
I0127 03:02:20.440724 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/cert.pem (1123 bytes)
I0127 03:02:20.440826 1121411 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem, removing ...
I0127 03:02:20.440838 1121411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem
I0127 03:02:20.440872 1121411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-1057178/.minikube/key.pem (1675 bytes)
I0127 03:02:20.441000 1121411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem org=jenkins.newest-cni-642127 san=[127.0.0.1 192.168.50.51 localhost minikube newest-cni-642127]
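minikube generates this server certificate in Go; a hedged openssl equivalent using the org, CA paths, and SANs from the line above (the 365-day validity is an assumption):

    cd /home/jenkins/minikube-integration/20316-1057178/.minikube
    printf 'subjectAltName = IP:127.0.0.1, IP:192.168.50.51, DNS:localhost, DNS:minikube, DNS:newest-cni-642127\n' > san.cnf
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout machines/server-key.pem -out server.csr \
      -subj "/O=jenkins.newest-cni-642127"
    # Sign with the profile CA referenced in the log line above.
    openssl x509 -req -in server.csr -days 365 \
      -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
      -extfile san.cnf -out machines/server.pem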
I0127 03:02:20.582957 1121411 provision.go:177] copyRemoteCerts
I0127 03:02:20.583042 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0127 03:02:20.583082 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:20.586468 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.586937 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:20.586967 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.587297 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:20.587493 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:20.587653 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:20.587816 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
I0127 03:02:20.678286 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0127 03:02:20.710984 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0127 03:02:20.743521 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0127 03:02:20.776342 1121411 provision.go:87] duration metric: took 344.690364ms to configureAuth
I0127 03:02:20.776390 1121411 buildroot.go:189] setting minikube options for container-runtime
I0127 03:02:20.776645 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 03:02:20.776665 1121411 machine.go:96] duration metric: took 738.080097ms to provisionDockerMachine
I0127 03:02:20.776676 1121411 start.go:293] postStartSetup for "newest-cni-642127" (driver="kvm2")
I0127 03:02:20.776689 1121411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0127 03:02:20.776728 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:20.777166 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0127 03:02:20.777201 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:20.781262 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.781754 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:20.781782 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.782169 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:20.782409 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:20.782633 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:20.782886 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
I0127 03:02:20.877090 1121411 ssh_runner.go:195] Run: cat /etc/os-release
I0127 03:02:20.882893 1121411 info.go:137] Remote host: Buildroot 2023.02.9
I0127 03:02:20.882941 1121411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/addons for local assets ...
I0127 03:02:20.883012 1121411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-1057178/.minikube/files for local assets ...
I0127 03:02:20.883121 1121411 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem -> 10644392.pem in /etc/ssl/certs
I0127 03:02:20.883262 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0127 03:02:20.897501 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /etc/ssl/certs/10644392.pem (1708 bytes)
I0127 03:02:20.927044 1121411 start.go:296] duration metric: took 150.330171ms for postStartSetup
I0127 03:02:20.927103 1121411 fix.go:56] duration metric: took 20.579822967s for fixHost
I0127 03:02:20.927133 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:20.930644 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.931093 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:20.931129 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:20.931414 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:20.931717 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:20.931919 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:20.932105 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:20.932280 1121411 main.go:141] libmachine: Using SSH client type: native
I0127 03:02:20.932530 1121411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.50.51 22 <nil> <nil>}
I0127 03:02:20.932545 1121411 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0127 03:02:21.046461 1121411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737946941.010071066
I0127 03:02:21.046493 1121411 fix.go:216] guest clock: 1737946941.010071066
I0127 03:02:21.046504 1121411 fix.go:229] Guest: 2025-01-27 03:02:21.010071066 +0000 UTC Remote: 2025-01-27 03:02:20.927108919 +0000 UTC m=+20.729857739 (delta=82.962147ms)
I0127 03:02:21.046536 1121411 fix.go:200] guest clock delta is within tolerance: 82.962147ms
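fix.go compares the guest's `date +%s.%N` output against the host clock and accepts small drift, as the delta computation above shows. A sketch of the same check; the ssh shorthand and the 1s tolerance are assumptions, not taken from the log:

    guest_ts=$(ssh docker@192.168.50.51 'date +%s.%N')   # guest clock
    host_ts=$(date +%s.%N)                               # host clock
    awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN {
      d = h - g; if (d < 0) d = -d
      printf "delta=%.3fs\n", d
      exit (d > 1.0) ? 1 : 0   # assumed tolerance, for illustration only
    }'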
I0127 03:02:21.046543 1121411 start.go:83] releasing machines lock for "newest-cni-642127", held for 20.699275534s
I0127 03:02:21.046580 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:21.046929 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
I0127 03:02:21.050101 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:21.050549 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:21.050572 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:21.050930 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:21.051682 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:21.051910 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:21.052040 1121411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0127 03:02:21.052128 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:21.052184 1121411 ssh_runner.go:195] Run: cat /version.json
I0127 03:02:21.052219 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:21.055762 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:21.055836 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:21.056356 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:21.056389 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:21.056429 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:21.056447 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:21.056720 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:21.056899 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:21.056974 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:21.057099 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:21.057147 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:21.057303 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
I0127 03:02:21.057708 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:21.057902 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
I0127 03:02:21.169709 1121411 ssh_runner.go:195] Run: systemctl --version
I0127 03:02:21.177622 1121411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0127 03:02:21.184029 1121411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0127 03:02:21.184112 1121411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0127 03:02:21.202861 1121411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
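The find/mv pair above parks conflicting CNI configs under a `.mk_disabled` suffix rather than deleting them, so reversing the change is just renaming back (illustrative, using the file named in the log):

    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
            /etc/cni/net.d/87-podman-bridge.conflist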
I0127 03:02:21.202890 1121411 start.go:495] detecting cgroup driver to use...
I0127 03:02:21.202967 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0127 03:02:21.236110 1121411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0127 03:02:21.250683 1121411 docker.go:217] disabling cri-docker service (if available) ...
I0127 03:02:21.250796 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0127 03:02:21.266354 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0127 03:02:21.284146 1121411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0127 03:02:21.436406 1121411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0127 03:02:21.620560 1121411 docker.go:233] disabling docker service ...
I0127 03:02:21.620655 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0127 03:02:21.639534 1121411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0127 03:02:21.657179 1121411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0127 03:02:21.828676 1121411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0127 03:02:21.993891 1121411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0127 03:02:22.011124 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0127 03:02:22.037734 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0127 03:02:22.049863 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0127 03:02:22.064327 1121411 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0127 03:02:22.064427 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0127 03:02:22.080328 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 03:02:22.093806 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0127 03:02:22.106165 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0127 03:02:22.117782 1121411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0127 03:02:22.129650 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0127 03:02:22.152872 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0127 03:02:22.165020 1121411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
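[editor's note: each sed above flips one containerd setting; because this run uses the cgroupfs driver, SystemdCgroup must be false. A hedged Go equivalent of that first substitution, assuming the stock /etc/containerd/config.toml layout:]

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}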
I0127 03:02:22.177918 1121411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0127 03:02:22.188259 1121411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0127 03:02:22.188355 1121411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0127 03:02:22.204350 1121411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
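[editor's note: the status-255 sysctl is expected on a fresh guest: /proc/sys/net/bridge only exists once the br_netfilter module is loaded, which is why the modprobe and the ip_forward write follow. A small sketch of that fallback, assuming root on the guest:]

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is missing, br_netfilter is not loaded
	// yet; load it, then enable IPv4 forwarding for pod traffic.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe failed: %v: %s\n", err, out)
			os.Exit(1)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		panic(err)
	}
}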
I0127 03:02:22.218093 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 03:02:22.356619 1121411 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0127 03:02:22.385087 1121411 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0127 03:02:22.385172 1121411 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0127 03:02:22.389980 1121411 retry.go:31] will retry after 758.524819ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0127 03:02:23.148722 1121411 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
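[editor's note: the "will retry after 758.524819ms" line is a generic poll-until-the-socket-exists loop while containerd restarts. A minimal sketch of such a wait; the fixed interval here is an illustrative guess, not minikube's backoff:]

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes, mirroring
// the "Will wait 60s for socket path" wait in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(750 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is up")
}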
I0127 03:02:23.154533 1121411 start.go:563] Will wait 60s for crictl version
I0127 03:02:23.154611 1121411 ssh_runner.go:195] Run: which crictl
I0127 03:02:23.159040 1121411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0127 03:02:23.200478 1121411 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.23
RuntimeApiVersion: v1
I0127 03:02:23.200579 1121411 ssh_runner.go:195] Run: containerd --version
I0127 03:02:23.228424 1121411 ssh_runner.go:195] Run: containerd --version
I0127 03:02:23.265392 1121411 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.23 ...
I0127 03:02:23.266856 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetIP
I0127 03:02:23.269741 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:23.270196 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:23.270231 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:23.270441 1121411 ssh_runner.go:195] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0127 03:02:23.275461 1121411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
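[editor's note: the bash one-liner above rewrites /etc/hosts safely: strip any stale host.minikube.internal line, append the fresh mapping, then copy the temp file back. A Go sketch of the same idea, simplified to write the file in place (needs root):]

package main

import (
	"os"
	"strings"
)

func main() {
	const hosts = "/etc/hosts"
	const entry = "192.168.50.1\thost.minikube.internal"
	data, err := os.ReadFile(hosts)
	if err != nil {
		panic(err)
	}
	// Drop any old host.minikube.internal line, then append the new one,
	// matching the grep -v + echo pipeline in the log.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hosts, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}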
I0127 03:02:23.294081 1121411 out.go:177] - kubeadm.pod-network-cidr=10.42.0.0/16
I0127 03:02:21.866190 1119263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.080241643s)
I0127 03:02:21.866293 1119263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 03:02:21.886667 1119263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 03:02:21.901554 1119263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 03:02:21.915270 1119263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 03:02:21.915296 1119263 kubeadm.go:157] found existing configuration files:
I0127 03:02:21.915369 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 03:02:21.929169 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 03:02:21.929294 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 03:02:21.942913 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 03:02:21.956444 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 03:02:21.956522 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 03:02:21.970342 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 03:02:21.989145 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 03:02:21.989232 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 03:02:22.001913 1119263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 03:02:22.013198 1119263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 03:02:22.013270 1119263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 03:02:22.026131 1119263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 03:02:22.226370 1119263 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 03:02:20.601947 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:22.605621 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:23.295574 1121411 kubeadm.go:883] updating cluster {Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0127 03:02:23.295756 1121411 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 03:02:23.295841 1121411 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 03:02:23.331579 1121411 containerd.go:627] all images are preloaded for containerd runtime.
I0127 03:02:23.331604 1121411 containerd.go:534] Images already preloaded, skipping extraction
I0127 03:02:23.331661 1121411 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 03:02:23.368818 1121411 containerd.go:627] all images are preloaded for containerd runtime.
I0127 03:02:23.368848 1121411 cache_images.go:84] Images are preloaded, skipping loading
I0127 03:02:23.368856 1121411 kubeadm.go:934] updating node { 192.168.50.51 8443 v1.32.1 containerd true true} ...
I0127 03:02:23.369012 1121411 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-642127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.51
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0127 03:02:23.369101 1121411 ssh_runner.go:195] Run: sudo crictl info
I0127 03:02:23.405913 1121411 cni.go:84] Creating CNI manager for ""
I0127 03:02:23.405949 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 03:02:23.405966 1121411 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
I0127 03:02:23.406015 1121411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.51 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-642127 NodeName:newest-cni-642127 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0127 03:02:23.406210 1121411 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.50.51
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "newest-cni-642127"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.50.51"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.50.51"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.42.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.42.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
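[editor's note: minikube renders this kubeadm config from a Go template in its bootstrapper package. The toy below only illustrates that render-and-write pattern; the template text and field names are illustrative, not minikube's actual template:]

package main

import (
	"os"
	"text/template"
)

// Illustrative template: a few fields from the config above, filled in
// from a struct the way a bootstrapper would before scp'ing the result.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	err := t.Execute(os.Stdout, struct {
		KubernetesVersion, PodSubnet, ServiceSubnet string
	}{"v1.32.1", "10.42.0.0/16", "10.96.0.0/12"})
	if err != nil {
		panic(err)
	}
}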
I0127 03:02:23.406291 1121411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0127 03:02:23.418253 1121411 binaries.go:44] Found k8s binaries, skipping transfer
I0127 03:02:23.418339 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0127 03:02:23.431397 1121411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0127 03:02:23.452908 1121411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0127 03:02:23.474059 1121411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0127 03:02:23.494976 1121411 ssh_runner.go:195] Run: grep 192.168.50.51 control-plane.minikube.internal$ /etc/hosts
I0127 03:02:23.499246 1121411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.51 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0127 03:02:23.512541 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 03:02:23.648564 1121411 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 03:02:23.667204 1121411 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127 for IP: 192.168.50.51
I0127 03:02:23.667230 1121411 certs.go:194] generating shared ca certs ...
I0127 03:02:23.667265 1121411 certs.go:226] acquiring lock for ca certs: {Name:mk567acc23cbe907605c03a2ec03c8e4859e8343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:02:23.667447 1121411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key
I0127 03:02:23.667526 1121411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key
I0127 03:02:23.667540 1121411 certs.go:256] generating profile certs ...
I0127 03:02:23.667681 1121411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/client.key
I0127 03:02:23.667777 1121411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.key.fe27a200
I0127 03:02:23.667863 1121411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.key
I0127 03:02:23.668017 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem (1338 bytes)
W0127 03:02:23.668071 1121411 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439_empty.pem, impossibly tiny 0 bytes
I0127 03:02:23.668085 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca-key.pem (1679 bytes)
I0127 03:02:23.668115 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/ca.pem (1078 bytes)
I0127 03:02:23.668143 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/cert.pem (1123 bytes)
I0127 03:02:23.668177 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/key.pem (1675 bytes)
I0127 03:02:23.668261 1121411 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem (1708 bytes)
I0127 03:02:23.669195 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0127 03:02:23.715219 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0127 03:02:23.757555 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0127 03:02:23.797303 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0127 03:02:23.839764 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I0127 03:02:23.889721 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0127 03:02:23.923393 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0127 03:02:23.953947 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/profiles/newest-cni-642127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0127 03:02:23.983760 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0127 03:02:24.016899 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/certs/1064439.pem --> /usr/share/ca-certificates/1064439.pem (1338 bytes)
I0127 03:02:24.060186 1121411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-1057178/.minikube/files/etc/ssl/certs/10644392.pem --> /usr/share/ca-certificates/10644392.pem (1708 bytes)
I0127 03:02:24.099215 1121411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0127 03:02:24.120841 1121411 ssh_runner.go:195] Run: openssl version
I0127 03:02:24.127163 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0127 03:02:24.139725 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0127 03:02:24.144911 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 01:43 /usr/share/ca-certificates/minikubeCA.pem
I0127 03:02:24.145000 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0127 03:02:24.153545 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0127 03:02:24.167817 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1064439.pem && ln -fs /usr/share/ca-certificates/1064439.pem /etc/ssl/certs/1064439.pem"
I0127 03:02:24.182019 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1064439.pem
I0127 03:02:24.188811 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 01:51 /usr/share/ca-certificates/1064439.pem
I0127 03:02:24.188883 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1064439.pem
I0127 03:02:24.196999 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1064439.pem /etc/ssl/certs/51391683.0"
I0127 03:02:24.209518 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10644392.pem && ln -fs /usr/share/ca-certificates/10644392.pem /etc/ssl/certs/10644392.pem"
I0127 03:02:24.221497 1121411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10644392.pem
I0127 03:02:24.226538 1121411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 01:51 /usr/share/ca-certificates/10644392.pem
I0127 03:02:24.226618 1121411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10644392.pem
I0127 03:02:24.233572 1121411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10644392.pem /etc/ssl/certs/3ec20f2e.0"
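[editor's note: the <hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) exist because OpenSSL looks up trusted CAs in /etc/ssl/certs by subject hash. A sketch of the hash-then-link step, shelling out to openssl just as the log does (needs root):]

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject hash that OpenSSL's lookup machinery
	// expects as the /etc/ssl/certs/<hash>.0 symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}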
I0127 03:02:24.245296 1121411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0127 03:02:24.250242 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0127 03:02:24.256818 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0127 03:02:24.264939 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0127 03:02:24.272818 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0127 03:02:24.280734 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0127 03:02:24.289169 1121411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0127 03:02:24.296827 1121411 kubeadm.go:392] StartCluster: {Name:newest-cni-642127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-642127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0127 03:02:24.297003 1121411 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0127 03:02:24.297095 1121411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 03:02:24.345692 1121411 cri.go:89] found id: "a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c"
I0127 03:02:24.345721 1121411 cri.go:89] found id: "2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba"
I0127 03:02:24.345726 1121411 cri.go:89] found id: "1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f"
I0127 03:02:24.345731 1121411 cri.go:89] found id: "7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc"
I0127 03:02:24.345736 1121411 cri.go:89] found id: "f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71"
I0127 03:02:24.345741 1121411 cri.go:89] found id: "6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19"
I0127 03:02:24.345745 1121411 cri.go:89] found id: "22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a"
I0127 03:02:24.345749 1121411 cri.go:89] found id: "6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3"
I0127 03:02:24.345753 1121411 cri.go:89] found id: ""
I0127 03:02:24.345806 1121411 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0127 03:02:24.363134 1121411 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-01-27T03:02:24Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0127 03:02:24.363233 1121411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0127 03:02:24.377414 1121411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0127 03:02:24.377441 1121411 kubeadm.go:593] restartPrimaryControlPlane start ...
I0127 03:02:24.377512 1121411 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0127 03:02:24.391116 1121411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0127 03:02:24.392658 1121411 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-642127" does not appear in /home/jenkins/minikube-integration/20316-1057178/kubeconfig
I0127 03:02:24.393662 1121411 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-1057178/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-642127" cluster setting kubeconfig missing "newest-cni-642127" context setting]
I0127 03:02:24.395074 1121411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
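[editor's note: when a profile is missing from the kubeconfig, minikube repairs the file in place rather than regenerating it. A hedged sketch of that repair using client-go's clientcmd API; the server address and entry names come from the log, while the CA path and auth-info wiring are illustrative assumptions:]

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/20316-1057178/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	// Add the missing cluster and context entries for the profile, then
	// write the file back; roughly what the "needs updating (will repair)"
	// step above amounts to.
	cluster := clientcmdapi.NewCluster()
	cluster.Server = "https://192.168.50.51:8443"
	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/20316-1057178/.minikube/ca.crt" // assumed path
	cfg.Clusters["newest-cni-642127"] = cluster

	ctx := clientcmdapi.NewContext()
	ctx.Cluster = "newest-cni-642127"
	ctx.AuthInfo = "newest-cni-642127" // assumes a matching user entry exists
	cfg.Contexts["newest-cni-642127"] = ctx

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}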
I0127 03:02:24.406122 1121411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0127 03:02:24.417412 1121411 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.51
I0127 03:02:24.417457 1121411 kubeadm.go:1160] stopping kube-system containers ...
I0127 03:02:24.417475 1121411 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
I0127 03:02:24.417545 1121411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0127 03:02:24.459011 1121411 cri.go:89] found id: "a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c"
I0127 03:02:24.459043 1121411 cri.go:89] found id: "2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba"
I0127 03:02:24.459049 1121411 cri.go:89] found id: "1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f"
I0127 03:02:24.459055 1121411 cri.go:89] found id: "7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc"
I0127 03:02:24.459059 1121411 cri.go:89] found id: "f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71"
I0127 03:02:24.459065 1121411 cri.go:89] found id: "6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19"
I0127 03:02:24.459069 1121411 cri.go:89] found id: "22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a"
I0127 03:02:24.459074 1121411 cri.go:89] found id: "6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3"
I0127 03:02:24.459079 1121411 cri.go:89] found id: ""
I0127 03:02:24.459085 1121411 cri.go:252] Stopping containers: [a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c 2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba 1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f 7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71 6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19 22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a 6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3]
I0127 03:02:24.459142 1121411 ssh_runner.go:195] Run: which crictl
I0127 03:02:24.463700 1121411 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 a6a8a2ba8bcb96c16e32fee70afcff2fb691cfba8dc6b1d17319c1af0fb57e5c 2875317dc160c0cbfcb5f0fffa03054fa2f58e5ac3a8c285da1e902b27ff47ba 1cf120eb9e1f79c0f94b2450693e0ddb3e2be97f570ea7c1bd076d78e161f63f 7ed46ffcaf84e6803aa42840b6d1ad2e881baaab16a17a4ce2b4937e53de42cc f7635245edb3222b276ec6cb742d2e37ae2d21613eef3d959d8e42317a2e1c71 6190f3df5366129319feab6d40d56f4b615cb6f059c4b8e91512bbd1b3943c19 22958da5ca6d5bc9ed8ce5b964ecf90f4ffa68a09d4b9760a64cb0233948db0a 6608f968238d3b18e99b5ae9b674c1a64c96b2d7b63769f917c4edc895804df3
I0127 03:02:24.514136 1121411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0127 03:02:24.533173 1121411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 03:02:24.546127 1121411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 03:02:24.546153 1121411 kubeadm.go:157] found existing configuration files:
I0127 03:02:24.546208 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0127 03:02:24.557350 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 03:02:24.557425 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 03:02:24.568241 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0127 03:02:24.579187 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 03:02:24.579283 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 03:02:24.590554 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0127 03:02:24.603551 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 03:02:24.603617 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 03:02:24.617395 1121411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0127 03:02:24.630452 1121411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 03:02:24.630532 1121411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0127 03:02:24.642268 1121411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 03:02:24.652281 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0127 03:02:24.829811 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0127 03:02:23.282142 1119007 pod_ready.go:103] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:25.286311 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.286348 1119007 pod_ready.go:82] duration metric: took 9.012019717s for pod "coredns-668d6bf9bc-86j6q" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.286363 1119007 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.296155 1119007 pod_ready.go:93] pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.296266 1119007 pod_ready.go:82] duration metric: took 9.891475ms for pod "coredns-668d6bf9bc-fk8cw" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.296304 1119007 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.306424 1119007 pod_ready.go:93] pod "etcd-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.306520 1119007 pod_ready.go:82] duration metric: took 10.178061ms for pod "etcd-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.306550 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.316320 1119007 pod_ready.go:93] pod "kube-apiserver-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.316353 1119007 pod_ready.go:82] duration metric: took 9.779811ms for pod "kube-apiserver-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.316368 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.324972 1119007 pod_ready.go:93] pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.324998 1119007 pod_ready.go:82] duration metric: took 8.620263ms for pod "kube-controller-manager-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.325011 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.682761 1119007 pod_ready.go:93] pod "kube-proxy-45pz6" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:25.682792 1119007 pod_ready.go:82] duration metric: took 357.773408ms for pod "kube-proxy-45pz6" in "kube-system" namespace to be "Ready" ...
I0127 03:02:25.682807 1119007 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:26.086323 1119007 pod_ready.go:93] pod "kube-scheduler-no-preload-887091" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:26.086365 1119007 pod_ready.go:82] duration metric: took 403.548355ms for pod "kube-scheduler-no-preload-887091" in "kube-system" namespace to be "Ready" ...
I0127 03:02:26.086378 1119007 pod_ready.go:39] duration metric: took 9.839373235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
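[editor's note: each pod_ready wait above polls the pod's Ready condition until it flips to True or the 6m0s budget runs out. A client-go sketch of such a wait, using one pod name from the log purely as an example:]

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is
// the predicate the pod_ready waits keep checking.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20316-1057178/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-no-preload-887091", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient; keep polling
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}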
I0127 03:02:26.086398 1119007 api_server.go:52] waiting for apiserver process to appear ...
I0127 03:02:26.086493 1119007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 03:02:26.115441 1119007 api_server.go:72] duration metric: took 10.186729821s to wait for apiserver process to appear ...
I0127 03:02:26.115474 1119007 api_server.go:88] waiting for apiserver healthz status ...
I0127 03:02:26.115503 1119007 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
I0127 03:02:26.125822 1119007 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
ok
I0127 03:02:26.127247 1119007 api_server.go:141] control plane version: v1.32.1
I0127 03:02:26.127277 1119007 api_server.go:131] duration metric: took 11.792506ms to wait for apiserver health ...
I0127 03:02:26.127289 1119007 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 03:02:26.285021 1119007 system_pods.go:59] 9 kube-system pods found
I0127 03:02:26.285059 1119007 system_pods.go:61] "coredns-668d6bf9bc-86j6q" [9b85ae79-ae19-4cd1-a0da-0343c9e2801c] Running
I0127 03:02:26.285067 1119007 system_pods.go:61] "coredns-668d6bf9bc-fk8cw" [c7075b92-233d-4a5a-b864-ef349d7125e7] Running
I0127 03:02:26.285073 1119007 system_pods.go:61] "etcd-no-preload-887091" [45d4a5fc-797f-4d4a-9204-049ebcdc5647] Running
I0127 03:02:26.285079 1119007 system_pods.go:61] "kube-apiserver-no-preload-887091" [11e7ea14-678a-408f-a722-8fedb984c086] Running
I0127 03:02:26.285085 1119007 system_pods.go:61] "kube-controller-manager-no-preload-887091" [95d63381-33aa-428b-80b1-6e8ccf96b8a1] Running
I0127 03:02:26.285089 1119007 system_pods.go:61] "kube-proxy-45pz6" [b3aa986f-d6d8-4050-8760-438aabd39bdc] Running
I0127 03:02:26.285094 1119007 system_pods.go:61] "kube-scheduler-no-preload-887091" [5065d24f-256d-43ad-bd00-1d5868b7214d] Running
I0127 03:02:26.285104 1119007 system_pods.go:61] "metrics-server-f79f97bbb-vshg4" [33ae36ed-d8a4-4d60-bcd0-1becf2d490bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 03:02:26.285110 1119007 system_pods.go:61] "storage-provisioner" [127a1f13-b70c-4482-bd8b-14a6bf24b663] Running
I0127 03:02:26.285121 1119007 system_pods.go:74] duration metric: took 157.824017ms to wait for pod list to return data ...
I0127 03:02:26.285134 1119007 default_sa.go:34] waiting for default service account to be created ...
I0127 03:02:26.480092 1119007 default_sa.go:45] found service account: "default"
I0127 03:02:26.480128 1119007 default_sa.go:55] duration metric: took 194.984911ms for default service account to be created ...
I0127 03:02:26.480141 1119007 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 03:02:26.688727 1119007 system_pods.go:87] 9 kube-system pods found
I0127 03:02:25.099839 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:27.100451 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:29.599652 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:26.158504 1121411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.328648156s)
I0127 03:02:26.158550 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0127 03:02:26.404894 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0127 03:02:26.526530 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0127 03:02:26.667432 1121411 api_server.go:52] waiting for apiserver process to appear ...
I0127 03:02:26.667635 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 03:02:27.167965 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 03:02:27.667769 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 03:02:27.702851 1121411 api_server.go:72] duration metric: took 1.03541528s to wait for apiserver process to appear ...
I0127 03:02:27.702957 1121411 api_server.go:88] waiting for apiserver healthz status ...
I0127 03:02:27.702996 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
I0127 03:02:27.703762 1121411 api_server.go:269] stopped: https://192.168.50.51:8443/healthz: Get "https://192.168.50.51:8443/healthz": dial tcp 192.168.50.51:8443: connect: connection refused
I0127 03:02:28.203377 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
I0127 03:02:28.204135 1121411 api_server.go:269] stopped: https://192.168.50.51:8443/healthz: Get "https://192.168.50.51:8443/healthz": dial tcp 192.168.50.51:8443: connect: connection refused
I0127 03:02:28.703884 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
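[editor's note: the connection-refused results above are normal while the apiserver static pod is still coming up; the check simply loops on GET /healthz until something answers. A sketch of that probe loop; TLS verification is skipped here for brevity, where a real check would trust the cluster CA:]

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.51:8443/healthz")
		if err != nil {
			// connection refused while the apiserver is still starting
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}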
I0127 03:02:32.408333 1119263 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 03:02:32.408420 1119263 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 03:02:32.408564 1119263 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 03:02:32.408723 1119263 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 03:02:32.408850 1119263 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 03:02:32.408936 1119263 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 03:02:32.410600 1119263 out.go:235] - Generating certificates and keys ...
I0127 03:02:32.410694 1119263 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 03:02:32.410784 1119263 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 03:02:32.410899 1119263 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 03:02:32.410985 1119263 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 03:02:32.411061 1119263 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 03:02:32.411144 1119263 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 03:02:32.411243 1119263 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 03:02:32.411349 1119263 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 03:02:32.411474 1119263 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 03:02:32.411592 1119263 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 03:02:32.411654 1119263 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 03:02:32.411755 1119263 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 03:02:32.411823 1119263 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 03:02:32.411900 1119263 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 03:02:32.411957 1119263 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 03:02:32.412019 1119263 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 03:02:32.412077 1119263 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 03:02:32.412166 1119263 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 03:02:32.412460 1119263 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 03:02:32.415088 1119263 out.go:235] - Booting up control plane ...
I0127 03:02:32.415215 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 03:02:32.415349 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 03:02:32.415444 1119263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 03:02:32.415597 1119263 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 03:02:32.415722 1119263 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 03:02:32.415772 1119263 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 03:02:32.415934 1119263 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 03:02:32.416041 1119263 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 03:02:32.416113 1119263 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001709036s
I0127 03:02:32.416228 1119263 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 03:02:32.416326 1119263 kubeadm.go:310] [api-check] The API server is healthy after 6.003070171s
I0127 03:02:32.416466 1119263 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 03:02:32.416619 1119263 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 03:02:32.416691 1119263 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 03:02:32.416890 1119263 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-264552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 03:02:32.416990 1119263 kubeadm.go:310] [bootstrap-token] Using token: glfh41.djplehex31d2nmyn
I0127 03:02:32.418322 1119263 out.go:235] - Configuring RBAC rules ...
I0127 03:02:32.418468 1119263 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 03:02:32.418553 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 03:02:32.418749 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 03:02:32.418932 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 03:02:32.419089 1119263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 03:02:32.419214 1119263 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 03:02:32.419378 1119263 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 03:02:32.419436 1119263 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 03:02:32.419498 1119263 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 03:02:32.419505 1119263 kubeadm.go:310]
I0127 03:02:32.419581 1119263 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 03:02:32.419587 1119263 kubeadm.go:310]
I0127 03:02:32.419686 1119263 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 03:02:32.419696 1119263 kubeadm.go:310]
I0127 03:02:32.419729 1119263 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 03:02:32.419809 1119263 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 03:02:32.419880 1119263 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 03:02:32.419891 1119263 kubeadm.go:310]
I0127 03:02:32.419987 1119263 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 03:02:32.419998 1119263 kubeadm.go:310]
I0127 03:02:32.420067 1119263 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 03:02:32.420078 1119263 kubeadm.go:310]
I0127 03:02:32.420143 1119263 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 03:02:32.420236 1119263 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 03:02:32.420319 1119263 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 03:02:32.420330 1119263 kubeadm.go:310]
I0127 03:02:32.420421 1119263 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 03:02:32.420508 1119263 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 03:02:32.420519 1119263 kubeadm.go:310]
I0127 03:02:32.420616 1119263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token glfh41.djplehex31d2nmyn \
I0127 03:02:32.420750 1119263 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
I0127 03:02:32.420781 1119263 kubeadm.go:310] --control-plane
I0127 03:02:32.420790 1119263 kubeadm.go:310]
I0127 03:02:32.420891 1119263 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 03:02:32.420902 1119263 kubeadm.go:310]
I0127 03:02:32.421036 1119263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token glfh41.djplehex31d2nmyn \
I0127 03:02:32.421192 1119263 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba
I0127 03:02:32.421210 1119263 cni.go:84] Creating CNI manager for ""
I0127 03:02:32.421220 1119263 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 03:02:32.422542 1119263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 03:02:30.820769 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 03:02:30.820809 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 03:02:30.820827 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
I0127 03:02:30.840404 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0127 03:02:30.840436 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0127 03:02:31.203948 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
I0127 03:02:31.209795 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 03:02:31.209820 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 03:02:31.703217 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
I0127 03:02:31.724822 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 03:02:31.724862 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 03:02:32.203446 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
I0127 03:02:32.210068 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0127 03:02:32.210100 1121411 api_server.go:103] status: https://192.168.50.51:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0127 03:02:32.703717 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
I0127 03:02:32.709016 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 200:
ok
I0127 03:02:32.719003 1121411 api_server.go:141] control plane version: v1.32.1
I0127 03:02:32.719041 1121411 api_server.go:131] duration metric: took 5.016063652s to wait for apiserver health ...
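The 403 -> 500 -> 200 progression above is the apiserver's normal readiness sequence: the anonymous probe is rejected outright at first, then /healthz enumerates poststarthook failures ("reason withheld" because the unauthenticated caller may not see details) until rbac/bootstrap-roles and the priority-class hook finish, and finally a bare "ok" comes back. A stdlib-only Go sketch of the ~500ms polling loop visible in the api_server.go lines (an illustration, not minikube's implementation):

// healthzpoll.go - poll an apiserver /healthz endpoint until it returns 200 (sketch).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The probe runs anonymously against a self-signed cert, which is
		// why the early responses above are 403 for system:anonymous.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "/healthz returned 200: ok"
			}
			fmt.Printf("status: %s returned error %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.51:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}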
I0127 03:02:32.719055 1121411 cni.go:84] Creating CNI manager for ""
I0127 03:02:32.719065 1121411 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 03:02:32.721101 1121411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 03:02:32.722433 1121411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 03:02:32.734857 1121411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
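The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config announced above. The log does not show the file's contents, so the JSON values below are assumptions; the sketch only illustrates the general shape of a bridge + host-local conflist and how such a file could be materialized:

// writecni.go - write a bridge CNI conflist (sketch; the JSON shape below is
// a representative example, NOT the exact file minikube generated).
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}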
I0127 03:02:32.761120 1121411 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 03:02:32.778500 1121411 system_pods.go:59] 9 kube-system pods found
I0127 03:02:32.778547 1121411 system_pods.go:61] "coredns-668d6bf9bc-dscrm" [2869a26b-4522-43cd-8417-abc17b77dc7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 03:02:32.778558 1121411 system_pods.go:61] "coredns-668d6bf9bc-rcdv8" [7697dd25-c99a-4413-a242-54cca1d1e5e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 03:02:32.778571 1121411 system_pods.go:61] "etcd-newest-cni-642127" [816ba553-68cb-4496-8dba-7839e9799916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 03:02:32.778583 1121411 system_pods.go:61] "kube-apiserver-newest-cni-642127" [69c55a7c-148b-40ff-86ec-739c5f668a11] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 03:02:32.778596 1121411 system_pods.go:61] "kube-controller-manager-newest-cni-642127" [1b6d1085-c4fe-43f2-a9ab-320adeb6cd38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 03:02:32.778608 1121411 system_pods.go:61] "kube-proxy-5q7mp" [1efd4424-3475-45e1-b80b-c941de90e34d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0127 03:02:32.778620 1121411 system_pods.go:61] "kube-scheduler-newest-cni-642127" [f1c81d74-2818-4093-9e69-19359cc3ff50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 03:02:32.778631 1121411 system_pods.go:61] "metrics-server-f79f97bbb-47hqq" [7f6ccb13-e73f-4514-a639-e1297b545cf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 03:02:32.778642 1121411 system_pods.go:61] "storage-provisioner" [ee06d1e8-0ae7-42c7-9c5b-d19fcfb83f40] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0127 03:02:32.778653 1121411 system_pods.go:74] duration metric: took 17.501517ms to wait for pod list to return data ...
I0127 03:02:32.778667 1121411 node_conditions.go:102] verifying NodePressure condition ...
I0127 03:02:32.783164 1121411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 03:02:32.783201 1121411 node_conditions.go:123] node cpu capacity is 2
I0127 03:02:32.783216 1121411 node_conditions.go:105] duration metric: took 4.539816ms to run NodePressure ...
I0127 03:02:32.783239 1121411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0127 03:02:33.135340 1121411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 03:02:33.148690 1121411 ops.go:34] apiserver oom_adj: -16
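The oom_adj probe confirms the kubelet launched the apiserver with a strongly negative OOM adjustment (-16 here), so under memory pressure the kernel will prefer to kill nearly anything else first. The same check as the logged bash one-liner, sketched in Go (pgrep assumed on PATH, as in the log):

// oomcheck.go - read /proc/<pid>/oom_adj for the apiserver (sketch of the
// "cat /proc/$(pgrep kube-apiserver)/oom_adj" probe above).
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		log.Fatal(err) // pgrep exits non-zero when no process matches
	}
	pid := strings.Fields(string(out))[0] // first match is enough for the check
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // the run above printed -16
}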
I0127 03:02:33.148723 1121411 kubeadm.go:597] duration metric: took 8.771274475s to restartPrimaryControlPlane
I0127 03:02:33.148739 1121411 kubeadm.go:394] duration metric: took 8.851928105s to StartCluster
I0127 03:02:33.148766 1121411 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:02:33.148862 1121411 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20316-1057178/kubeconfig
I0127 03:02:33.150733 1121411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:02:33.150984 1121411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 03:02:33.151079 1121411 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 03:02:33.151202 1121411 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-642127"
I0127 03:02:33.151222 1121411 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-642127"
W0127 03:02:33.151238 1121411 addons.go:247] addon storage-provisioner should already be in state true
I0127 03:02:33.151257 1121411 addons.go:69] Setting metrics-server=true in profile "newest-cni-642127"
I0127 03:02:33.151258 1121411 addons.go:69] Setting default-storageclass=true in profile "newest-cni-642127"
I0127 03:02:33.151284 1121411 addons.go:238] Setting addon metrics-server=true in "newest-cni-642127"
I0127 03:02:33.151272 1121411 addons.go:69] Setting dashboard=true in profile "newest-cni-642127"
W0127 03:02:33.151294 1121411 addons.go:247] addon metrics-server should already be in state true
I0127 03:02:33.151294 1121411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-642127"
I0127 03:02:33.151315 1121411 addons.go:238] Setting addon dashboard=true in "newest-cni-642127"
I0127 03:02:33.151313 1121411 config.go:182] Loaded profile config "newest-cni-642127": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
W0127 03:02:33.151325 1121411 addons.go:247] addon dashboard should already be in state true
I0127 03:02:33.151330 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
I0127 03:02:33.151355 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
I0127 03:02:33.151285 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
I0127 03:02:33.151717 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:33.151747 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:33.151754 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:33.151760 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:33.151789 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:33.151793 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:33.151825 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:33.151865 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:33.152612 1121411 out.go:177] * Verifying Kubernetes components...
I0127 03:02:33.154050 1121411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 03:02:33.169429 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
I0127 03:02:33.169982 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:33.170451 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:33.170472 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:33.170815 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:33.171371 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36587
I0127 03:02:33.171487 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:33.171528 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:33.171747 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
I0127 03:02:33.171942 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:33.172289 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:33.172471 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:33.172498 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:33.172746 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:33.172766 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:33.172908 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:33.174172 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:33.174237 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
I0127 03:02:33.175157 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
I0127 03:02:33.175572 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:33.175616 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:33.175822 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:33.177792 1121411 addons.go:238] Setting addon default-storageclass=true in "newest-cni-642127"
W0127 03:02:33.177817 1121411 addons.go:247] addon default-storageclass should already be in state true
I0127 03:02:33.177848 1121411 host.go:66] Checking if "newest-cni-642127" exists ...
I0127 03:02:33.178206 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:33.178256 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:33.178862 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:33.178892 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:33.179421 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:33.192581 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
I0127 03:02:33.193097 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:33.193643 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:33.193668 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:33.194026 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:33.194248 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
I0127 03:02:33.197497 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:33.199029 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36881
I0127 03:02:33.199688 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:33.199789 1121411 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 03:02:33.200189 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:33.200217 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:33.200630 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:33.200826 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
I0127 03:02:33.201177 1121411 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:02:33.201196 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 03:02:33.201215 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:33.201773 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:33.201821 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:33.203099 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:33.204646 1121411 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 03:02:33.205709 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:33.206717 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:33.206782 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:33.207074 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:33.207272 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:33.207453 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:33.207613 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
I0127 03:02:33.208044 1121411 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 03:02:33.209101 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 03:02:33.209120 1121411 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 03:02:33.209140 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:33.212709 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:33.213133 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:33.213153 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:33.213451 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:33.213632 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:33.213734 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:33.213819 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
I0127 03:02:33.219861 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43957
I0127 03:02:33.220403 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:33.220991 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:33.221024 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:33.221408 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:33.222196 1121411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:33.222254 1121411 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:33.223731 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
I0127 03:02:33.224051 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:33.224552 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:33.224573 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:33.224816 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:33.225077 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
I0127 03:02:33.227906 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:33.229635 1121411 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 03:02:32.423722 1119263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 03:02:32.436568 1119263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0127 03:02:32.461950 1119263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 03:02:32.462072 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:32.462109 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-264552 minikube.k8s.io/updated_at=2025_01_27T03_02_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=embed-certs-264552 minikube.k8s.io/primary=true
I0127 03:02:32.477721 1119263 ops.go:34] apiserver oom_adj: -16
I0127 03:02:32.739220 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:33.239786 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:33.740039 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:34.239291 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:34.740312 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:31.600099 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:33.600177 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:33.231071 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 03:02:33.231090 1121411 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 03:02:33.231112 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:33.233979 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:33.234359 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:33.234412 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:33.234633 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:33.234777 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:33.234927 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:33.235147 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
I0127 03:02:33.243914 1121411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40181
I0127 03:02:33.244332 1121411 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:33.244875 1121411 main.go:141] libmachine: Using API Version 1
I0127 03:02:33.244889 1121411 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:33.245272 1121411 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:33.245443 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetState
I0127 03:02:33.247204 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .DriverName
I0127 03:02:33.247418 1121411 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 03:02:33.247429 1121411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 03:02:33.247455 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHHostname
I0127 03:02:33.250553 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:33.251030 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:c0:f5", ip: ""} in network mk-newest-cni-642127: {Iface:virbr2 ExpiryTime:2025-01-27 04:02:13 +0000 UTC Type:0 Mac:52:54:00:b2:c0:f5 Iaid: IPaddr:192.168.50.51 Prefix:24 Hostname:newest-cni-642127 Clientid:01:52:54:00:b2:c0:f5}
I0127 03:02:33.251045 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | domain newest-cni-642127 has defined IP address 192.168.50.51 and MAC address 52:54:00:b2:c0:f5 in network mk-newest-cni-642127
I0127 03:02:33.251208 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHPort
I0127 03:02:33.251359 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHKeyPath
I0127 03:02:33.251505 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .GetSSHUsername
I0127 03:02:33.251611 1121411 sshutil.go:53] new ssh client: &{IP:192.168.50.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/newest-cni-642127/id_rsa Username:docker}
I0127 03:02:33.375505 1121411 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 03:02:33.394405 1121411 api_server.go:52] waiting for apiserver process to appear ...
I0127 03:02:33.394507 1121411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 03:02:33.410947 1121411 api_server.go:72] duration metric: took 259.928237ms to wait for apiserver process to appear ...
I0127 03:02:33.410983 1121411 api_server.go:88] waiting for apiserver healthz status ...
I0127 03:02:33.411005 1121411 api_server.go:253] Checking apiserver healthz at https://192.168.50.51:8443/healthz ...
I0127 03:02:33.416758 1121411 api_server.go:279] https://192.168.50.51:8443/healthz returned 200:
ok
I0127 03:02:33.418367 1121411 api_server.go:141] control plane version: v1.32.1
I0127 03:02:33.418395 1121411 api_server.go:131] duration metric: took 7.402525ms to wait for apiserver health ...
I0127 03:02:33.418407 1121411 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 03:02:33.424893 1121411 system_pods.go:59] 9 kube-system pods found
I0127 03:02:33.424921 1121411 system_pods.go:61] "coredns-668d6bf9bc-dscrm" [2869a26b-4522-43cd-8417-abc17b77dc7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 03:02:33.424928 1121411 system_pods.go:61] "coredns-668d6bf9bc-rcdv8" [7697dd25-c99a-4413-a242-54cca1d1e5e9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0127 03:02:33.424936 1121411 system_pods.go:61] "etcd-newest-cni-642127" [816ba553-68cb-4496-8dba-7839e9799916] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0127 03:02:33.424965 1121411 system_pods.go:61] "kube-apiserver-newest-cni-642127" [69c55a7c-148b-40ff-86ec-739c5f668a11] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0127 03:02:33.424984 1121411 system_pods.go:61] "kube-controller-manager-newest-cni-642127" [1b6d1085-c4fe-43f2-a9ab-320adeb6cd38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0127 03:02:33.424992 1121411 system_pods.go:61] "kube-proxy-5q7mp" [1efd4424-3475-45e1-b80b-c941de90e34d] Running
I0127 03:02:33.424997 1121411 system_pods.go:61] "kube-scheduler-newest-cni-642127" [f1c81d74-2818-4093-9e69-19359cc3ff50] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0127 03:02:33.425005 1121411 system_pods.go:61] "metrics-server-f79f97bbb-47hqq" [7f6ccb13-e73f-4514-a639-e1297b545cf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 03:02:33.425009 1121411 system_pods.go:61] "storage-provisioner" [ee06d1e8-0ae7-42c7-9c5b-d19fcfb83f40] Running
I0127 03:02:33.425017 1121411 system_pods.go:74] duration metric: took 6.604015ms to wait for pod list to return data ...
I0127 03:02:33.425027 1121411 default_sa.go:34] waiting for default service account to be created ...
I0127 03:02:33.427992 1121411 default_sa.go:45] found service account: "default"
I0127 03:02:33.428016 1121411 default_sa.go:55] duration metric: took 2.981475ms for default service account to be created ...
I0127 03:02:33.428030 1121411 kubeadm.go:582] duration metric: took 277.019922ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
I0127 03:02:33.428053 1121411 node_conditions.go:102] verifying NodePressure condition ...
I0127 03:02:33.431283 1121411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0127 03:02:33.431303 1121411 node_conditions.go:123] node cpu capacity is 2
I0127 03:02:33.431313 1121411 node_conditions.go:105] duration metric: took 3.254985ms to run NodePressure ...
I0127 03:02:33.431324 1121411 start.go:241] waiting for startup goroutines ...
I0127 03:02:33.462238 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 03:02:33.462261 1121411 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 03:02:33.476129 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 03:02:33.476162 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 03:02:33.488754 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 03:02:33.488789 1121411 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 03:02:33.497073 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 03:02:33.519522 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 03:02:33.519557 1121411 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 03:02:33.551868 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 03:02:33.551905 1121411 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 03:02:33.565343 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 03:02:33.565374 1121411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 03:02:33.600695 1121411 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 03:02:33.600720 1121411 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 03:02:33.602150 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:02:33.632660 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 03:02:33.632694 1121411 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 03:02:33.652690 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 03:02:33.705754 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 03:02:33.705786 1121411 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 03:02:33.793208 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 03:02:33.793261 1121411 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 03:02:33.881849 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 03:02:33.881884 1121411 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 03:02:33.979510 1121411 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 03:02:33.979542 1121411 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 03:02:34.040605 1121411 main.go:141] libmachine: Making call to close driver server
I0127 03:02:34.040637 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
I0127 03:02:34.041032 1121411 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:34.041080 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
I0127 03:02:34.041090 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:34.041113 1121411 main.go:141] libmachine: Making call to close driver server
I0127 03:02:34.041137 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
I0127 03:02:34.041431 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
I0127 03:02:34.041481 1121411 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:34.041493 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:34.058399 1121411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
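Every addon above follows the same two-step pattern: scp each manifest into /etc/kubernetes/addons inside the VM, then invoke the VM's own kubectl binary once with all files on a single apply. A sketch of assembling that combined command (paths match the log; the file list is abbreviated and the program itself is illustrative, not minikube's code):

// applyaddons.go - build and run the single combined `kubectl apply` seen above (sketch).
package main

import (
	"log"
	"os/exec"
)

func main() {
	// sudo accepts a leading VAR=value assignment, exactly as in the logged command line.
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.32.1/kubectl", "apply",
	}
	for _, f := range []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ...remaining dashboard manifests as in the log...
	} {
		args = append(args, "-f", f)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
	log.Printf("applied:\n%s", out)
}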
I0127 03:02:34.104645 1121411 main.go:141] libmachine: Making call to close driver server
I0127 03:02:34.104666 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
I0127 03:02:34.104999 1121411 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:34.105025 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:34.105046 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
I0127 03:02:35.194812 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.542086223s)
I0127 03:02:35.194884 1121411 main.go:141] libmachine: Making call to close driver server
I0127 03:02:35.194899 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
I0127 03:02:35.194665 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.592471736s)
I0127 03:02:35.194995 1121411 main.go:141] libmachine: Making call to close driver server
I0127 03:02:35.195010 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
I0127 03:02:35.197298 1121411 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:35.197320 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:35.197331 1121411 main.go:141] libmachine: Making call to close driver server
I0127 03:02:35.197338 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
I0127 03:02:35.197484 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
I0127 03:02:35.197524 1121411 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:35.197543 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:35.197551 1121411 main.go:141] libmachine: Making call to close driver server
I0127 03:02:35.197563 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
I0127 03:02:35.197565 1121411 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:35.197575 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:35.197591 1121411 addons.go:479] Verifying addon metrics-server=true in "newest-cni-642127"
I0127 03:02:35.197806 1121411 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:35.197821 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:35.738350 1121411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.679893698s)
I0127 03:02:35.738414 1121411 main.go:141] libmachine: Making call to close driver server
I0127 03:02:35.738431 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
I0127 03:02:35.738859 1121411 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:35.738880 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:35.738897 1121411 main.go:141] libmachine: Making call to close driver server
I0127 03:02:35.738906 1121411 main.go:141] libmachine: (newest-cni-642127) Calling .Close
I0127 03:02:35.739194 1121411 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:35.739211 1121411 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:35.739256 1121411 main.go:141] libmachine: (newest-cni-642127) DBG | Closing plugin on server side
I0127 03:02:35.740543 1121411 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p newest-cni-642127 addons enable metrics-server
I0127 03:02:35.742112 1121411 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
I0127 03:02:35.743312 1121411 addons.go:514] duration metric: took 2.592255359s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
I0127 03:02:35.743356 1121411 start.go:246] waiting for cluster config update ...
I0127 03:02:35.743372 1121411 start.go:255] writing updated cluster config ...
I0127 03:02:35.743643 1121411 ssh_runner.go:195] Run: rm -f paused
I0127 03:02:35.802583 1121411 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
I0127 03:02:35.804271 1121411 out.go:177] * Done! kubectl is now configured to use "newest-cni-642127" cluster and "default" namespace by default
I0127 03:02:35.240046 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:35.739577 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:36.239666 1119263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:36.396540 1119263 kubeadm.go:1113] duration metric: took 3.934543669s to wait for elevateKubeSystemPrivileges
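The burst of `sudo ... kubectl get sa default` retries that ends here is the wait that elevateKubeSystemPrivileges is timing: the command is re-run on a ~500ms cadence until the token controller has created the default ServiceAccount, at which point the earlier `create clusterrolebinding minikube-rbac` binding is usable. A stand-alone sketch of that retry loop (paths as in the log; the helper name is illustrative):

// waitsa.go - retry `kubectl get sa default` until it succeeds (sketch).
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil // token controller has created the default ServiceAccount
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.32.1/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}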
I0127 03:02:36.396587 1119263 kubeadm.go:394] duration metric: took 4m36.765414047s to StartCluster
I0127 03:02:36.396612 1119263 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:02:36.396700 1119263 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20316-1057178/kubeconfig
I0127 03:02:36.399283 1119263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:02:36.399607 1119263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.145 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 03:02:36.399896 1119263 config.go:182] Loaded profile config "embed-certs-264552": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 03:02:36.399967 1119263 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 03:02:36.400065 1119263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-264552"
I0127 03:02:36.400097 1119263 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-264552"
W0127 03:02:36.400111 1119263 addons.go:247] addon storage-provisioner should already be in state true
I0127 03:02:36.400147 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
I0127 03:02:36.400364 1119263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-264552"
I0127 03:02:36.400393 1119263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-264552"
I0127 03:02:36.400697 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:36.400746 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:36.400860 1119263 addons.go:69] Setting dashboard=true in profile "embed-certs-264552"
I0127 03:02:36.400889 1119263 addons.go:238] Setting addon dashboard=true in "embed-certs-264552"
I0127 03:02:36.400891 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
W0127 03:02:36.400899 1119263 addons.go:247] addon dashboard should already be in state true
I0127 03:02:36.400934 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:36.400962 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
I0127 03:02:36.401007 1119263 addons.go:69] Setting metrics-server=true in profile "embed-certs-264552"
I0127 03:02:36.401034 1119263 addons.go:238] Setting addon metrics-server=true in "embed-certs-264552"
W0127 03:02:36.401044 1119263 addons.go:247] addon metrics-server should already be in state true
I0127 03:02:36.401078 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
I0127 03:02:36.401508 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:36.401557 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:36.401777 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:36.401824 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:36.401991 1119263 out.go:177] * Verifying Kubernetes components...
I0127 03:02:36.403910 1119263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 03:02:36.422683 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
I0127 03:02:36.423177 1119263 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:36.423824 1119263 main.go:141] libmachine: Using API Version 1
I0127 03:02:36.423851 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:36.424298 1119263 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:36.424516 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
I0127 03:02:36.425635 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
I0127 03:02:36.425994 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40999
I0127 03:02:36.426142 1119263 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:36.426423 1119263 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:36.426703 1119263 main.go:141] libmachine: Using API Version 1
I0127 03:02:36.426729 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:36.427088 1119263 main.go:141] libmachine: Using API Version 1
I0127 03:02:36.427111 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:36.427288 1119263 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:36.427869 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:36.427910 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:36.429980 1119263 addons.go:238] Setting addon default-storageclass=true in "embed-certs-264552"
W0127 03:02:36.429999 1119263 addons.go:247] addon default-storageclass should already be in state true
I0127 03:02:36.430029 1119263 host.go:66] Checking if "embed-certs-264552" exists ...
I0127 03:02:36.430409 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:36.430443 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:36.430902 1119263 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:36.431582 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:36.431620 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:36.449634 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
I0127 03:02:36.450301 1119263 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:36.451062 1119263 main.go:141] libmachine: Using API Version 1
I0127 03:02:36.451085 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:36.451525 1119263 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:36.452191 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:36.452239 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:36.455086 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40299
I0127 03:02:36.455301 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41491
I0127 03:02:36.455535 1119263 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:36.456246 1119263 main.go:141] libmachine: Using API Version 1
I0127 03:02:36.456264 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:36.456672 1119263 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:36.456898 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
I0127 03:02:36.458545 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
I0127 03:02:36.459300 1119263 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:36.459602 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
I0127 03:02:36.460164 1119263 main.go:141] libmachine: Using API Version 1
I0127 03:02:36.460195 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:36.461041 1119263 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:36.461379 1119263 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:36.461672 1119263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:36.461676 1119263 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 03:02:36.461723 1119263 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:36.461915 1119263 main.go:141] libmachine: Using API Version 1
I0127 03:02:36.461930 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:36.462520 1119263 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:36.462923 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
I0127 03:02:36.465082 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
I0127 03:02:36.465338 1119263 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 03:02:36.466448 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 03:02:36.466474 1119263 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 03:02:36.466495 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
I0127 03:02:36.466570 1119263 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 03:02:36.468155 1119263 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:02:36.468187 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 03:02:36.468209 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
I0127 03:02:36.470910 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
I0127 03:02:36.471779 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
I0127 03:02:36.471818 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
I0127 03:02:36.472039 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
I0127 03:02:36.472253 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
I0127 03:02:36.472399 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
I0127 03:02:36.472572 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
I0127 03:02:36.475423 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34703
I0127 03:02:36.476153 1119263 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:36.476804 1119263 main.go:141] libmachine: Using API Version 1
I0127 03:02:36.476823 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:36.477245 1119263 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:36.477505 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
I0127 03:02:36.479472 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
I0127 03:02:36.481333 1119263 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 03:02:36.481739 1119263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
I0127 03:02:36.482275 1119263 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:36.482837 1119263 main.go:141] libmachine: Using API Version 1
I0127 03:02:36.482854 1119263 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:36.482868 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 03:02:36.482887 1119263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 03:02:36.482910 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
I0127 03:02:36.483231 1119263 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:36.483493 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetState
I0127 03:02:36.486181 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .DriverName
I0127 03:02:36.486454 1119263 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 03:02:36.486475 1119263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 03:02:36.486493 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHHostname
I0127 03:02:36.488039 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
I0127 03:02:36.488500 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
I0127 03:02:36.488532 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
I0127 03:02:36.488756 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
I0127 03:02:36.488966 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
I0127 03:02:36.489130 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
I0127 03:02:36.489289 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
I0127 03:02:36.489612 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
I0127 03:02:36.489866 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
I0127 03:02:36.489889 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
I0127 03:02:36.490026 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
I0127 03:02:36.490149 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
I0127 03:02:36.490261 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
I0127 03:02:36.490344 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
I0127 03:02:36.494271 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
I0127 03:02:36.494636 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:7a:0a", ip: ""} in network mk-embed-certs-264552: {Iface:virbr1 ExpiryTime:2025-01-27 03:57:49 +0000 UTC Type:0 Mac:52:54:00:89:7a:0a Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:embed-certs-264552 Clientid:01:52:54:00:89:7a:0a}
I0127 03:02:36.494659 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | domain embed-certs-264552 has defined IP address 192.168.39.145 and MAC address 52:54:00:89:7a:0a in network mk-embed-certs-264552
I0127 03:02:36.495050 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHPort
I0127 03:02:36.495292 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHKeyPath
I0127 03:02:36.495511 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .GetSSHUsername
I0127 03:02:36.495682 1119263 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa Username:docker}
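
Each "new ssh client" line above is a separate SSH connection into the VM using the profile's id_rsa key. A minimal standalone sketch with golang.org/x/crypto/ssh, reusing the IP, port, user, and key path from the log (error handling shortened; this is not minikube's sshutil):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/embed-certs-264552/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
        }
        client, err := ssh.Dial("tcp", "192.168.39.145:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Printf("%s (err=%v)\n", out, err)
    }
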
I0127 03:02:36.737773 1119263 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 03:02:36.826450 1119263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-264552" to be "Ready" ...
I0127 03:02:36.857580 1119263 node_ready.go:49] node "embed-certs-264552" has status "Ready":"True"
I0127 03:02:36.857609 1119263 node_ready.go:38] duration metric: took 31.04815ms for node "embed-certs-264552" to be "Ready" ...
I0127 03:02:36.857623 1119263 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 03:02:36.873458 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 03:02:36.877540 1119263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace to be "Ready" ...
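
The node_ready/pod_ready waits above poll the API server for Ready conditions against a 6m0s budget. A minimal client-go sketch of the node half, assuming the in-VM kubeconfig path seen elsewhere in this log (the pod wait is analogous, checking the PodReady condition; this is not minikube's pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-264552", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println(`node "embed-certs-264552" is Ready`)
                return
            }
            time.Sleep(2 * time.Second) // poll interval is an assumption
        }
        fmt.Println("timed out waiting for node readiness")
    }
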
I0127 03:02:36.957829 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 03:02:36.957866 1119263 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 03:02:37.005603 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 03:02:37.005635 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 03:02:37.006377 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:02:37.031565 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 03:02:37.031587 1119263 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 03:02:37.100245 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 03:02:37.100282 1119263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 03:02:37.175281 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 03:02:37.175309 1119263 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 03:02:37.221791 1119263 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 03:02:37.221825 1119263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 03:02:37.308268 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 03:02:37.423632 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 03:02:37.423660 1119263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 03:02:37.588554 1119263 main.go:141] libmachine: Making call to close driver server
I0127 03:02:37.588586 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
I0127 03:02:37.589111 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
I0127 03:02:37.589130 1119263 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:37.589147 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:37.589162 1119263 main.go:141] libmachine: Making call to close driver server
I0127 03:02:37.589176 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
I0127 03:02:37.589462 1119263 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:37.589483 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:37.634711 1119263 main.go:141] libmachine: Making call to close driver server
I0127 03:02:37.634744 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
I0127 03:02:37.635023 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
I0127 03:02:37.635065 1119263 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:37.635073 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:37.649206 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 03:02:37.649231 1119263 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 03:02:37.780671 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 03:02:37.780709 1119263 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 03:02:37.963118 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 03:02:37.963151 1119263 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 03:02:38.051717 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 03:02:38.051755 1119263 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 03:02:38.102698 1119263 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 03:02:38.102726 1119263 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 03:02:38.177754 1119263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
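
Addon installation above is two steps: scp each manifest to /etc/kubernetes/addons on the VM, then one kubectl apply over all of them with the in-VM kubeconfig. The apply half, reduced to an os/exec sketch (paths copied from the log line above; sudo's leading NAME=value argument sets KUBECONFIG for the command):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.32.1/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
            "-f", "/etc/kubernetes/addons/dashboard-svc.yaml",
            // ...remaining dashboard manifests exactly as in the log line above
        )
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s(err=%v)\n", out, err)
    }
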
I0127 03:02:38.867496 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.861076308s)
I0127 03:02:38.867579 1119263 main.go:141] libmachine: Making call to close driver server
I0127 03:02:38.867594 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
I0127 03:02:38.868010 1119263 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:38.868037 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:38.868054 1119263 main.go:141] libmachine: Making call to close driver server
I0127 03:02:38.868067 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
I0127 03:02:38.868377 1119263 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:38.868397 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:38.923746 1119263 pod_ready.go:103] pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:38.991645 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.683326945s)
I0127 03:02:38.991708 1119263 main.go:141] libmachine: Making call to close driver server
I0127 03:02:38.991728 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
I0127 03:02:38.992116 1119263 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:38.992137 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:38.992146 1119263 main.go:141] libmachine: Making call to close driver server
I0127 03:02:38.992153 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
I0127 03:02:38.992566 1119263 main.go:141] libmachine: (embed-certs-264552) DBG | Closing plugin on server side
I0127 03:02:38.992598 1119263 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:38.992624 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:38.992643 1119263 addons.go:479] Verifying addon metrics-server=true in "embed-certs-264552"
I0127 03:02:39.990731 1119263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.812917797s)
I0127 03:02:39.990802 1119263 main.go:141] libmachine: Making call to close driver server
I0127 03:02:39.990818 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
I0127 03:02:39.991192 1119263 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:39.991223 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:39.991235 1119263 main.go:141] libmachine: Making call to close driver server
I0127 03:02:39.991246 1119263 main.go:141] libmachine: (embed-certs-264552) Calling .Close
I0127 03:02:39.991554 1119263 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:39.991575 1119263 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:39.993095 1119263 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p embed-certs-264552 addons enable metrics-server
I0127 03:02:39.994564 1119263 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0127 03:02:35.602346 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:38.100810 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:39.995898 1119263 addons.go:514] duration metric: took 3.595931069s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
I0127 03:02:40.888544 1119263 pod_ready.go:93] pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:40.888568 1119263 pod_ready.go:82] duration metric: took 4.01099998s for pod "coredns-668d6bf9bc-mbkl2" in "kube-system" namespace to be "Ready" ...
I0127 03:02:40.888579 1119263 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace to be "Ready" ...
I0127 03:02:40.895910 1119263 pod_ready.go:93] pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:40.895941 1119263 pod_ready.go:82] duration metric: took 7.354168ms for pod "coredns-668d6bf9bc-n5wn4" in "kube-system" namespace to be "Ready" ...
I0127 03:02:40.895955 1119263 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
I0127 03:02:40.900393 1119263 pod_ready.go:93] pod "etcd-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:40.900415 1119263 pod_ready.go:82] duration metric: took 4.45357ms for pod "etcd-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
I0127 03:02:40.900426 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
I0127 03:02:40.908664 1119263 pod_ready.go:93] pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:40.908686 1119263 pod_ready.go:82] duration metric: took 8.251039ms for pod "kube-apiserver-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
I0127 03:02:40.908697 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
I0127 03:02:42.917072 1119263 pod_ready.go:103] pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:44.927051 1119263 pod_ready.go:93] pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:44.927083 1119263 pod_ready.go:82] duration metric: took 4.01837775s for pod "kube-controller-manager-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
I0127 03:02:44.927096 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kwqqr" in "kube-system" namespace to be "Ready" ...
I0127 03:02:44.939727 1119263 pod_ready.go:93] pod "kube-proxy-kwqqr" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:44.939759 1119263 pod_ready.go:82] duration metric: took 12.654042ms for pod "kube-proxy-kwqqr" in "kube-system" namespace to be "Ready" ...
I0127 03:02:44.939772 1119263 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
I0127 03:02:44.966136 1119263 pod_ready.go:93] pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:44.966165 1119263 pod_ready.go:82] duration metric: took 26.38251ms for pod "kube-scheduler-embed-certs-264552" in "kube-system" namespace to be "Ready" ...
I0127 03:02:44.966178 1119263 pod_ready.go:39] duration metric: took 8.108541494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 03:02:44.966199 1119263 api_server.go:52] waiting for apiserver process to appear ...
I0127 03:02:44.966260 1119263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 03:02:40.598596 1119269 pod_ready.go:103] pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace has status "Ready":"False"
I0127 03:02:41.593185 1119269 pod_ready.go:82] duration metric: took 4m0.0010842s for pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace to be "Ready" ...
E0127 03:02:41.593221 1119269 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-8skhl" in "kube-system" namespace to be "Ready" (will not retry!)
I0127 03:02:41.593251 1119269 pod_ready.go:39] duration metric: took 4m13.044846596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 03:02:41.593292 1119269 kubeadm.go:597] duration metric: took 4m21.461431723s to restartPrimaryControlPlane
W0127 03:02:41.593372 1119269 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
I0127 03:02:41.593408 1119269 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0127 03:02:43.620030 1119269 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (2.026590178s)
I0127 03:02:43.620115 1119269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0127 03:02:43.639142 1119269 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0127 03:02:43.651292 1119269 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0127 03:02:43.661667 1119269 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0127 03:02:43.661687 1119269 kubeadm.go:157] found existing configuration files:
I0127 03:02:43.661733 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
I0127 03:02:43.672110 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0127 03:02:43.672165 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0127 03:02:43.683718 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
I0127 03:02:43.693914 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0127 03:02:43.693983 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0127 03:02:43.704250 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
I0127 03:02:43.714202 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0127 03:02:43.714283 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0127 03:02:43.724775 1119269 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
I0127 03:02:43.734789 1119269 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0127 03:02:43.734857 1119269 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
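
The grep/rm sequence above is a stale-config check: each kubeadm kubeconfig must point at https://control-plane.minikube.internal:8444, and any file that is missing or points elsewhere is removed so that the upcoming kubeadm init regenerates it. The same logic as a local sketch:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: remove so kubeadm writes a fresh one.
                _ = os.Remove(f)
                fmt.Println("removed stale", f)
            }
        }
    }
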
I0127 03:02:43.746079 1119269 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0127 03:02:43.925921 1119269 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0127 03:02:44.991380 1119263 api_server.go:72] duration metric: took 8.59171979s to wait for apiserver process to appear ...
I0127 03:02:44.991410 1119263 api_server.go:88] waiting for apiserver healthz status ...
I0127 03:02:44.991439 1119263 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
I0127 03:02:44.997033 1119263 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
ok
I0127 03:02:44.998283 1119263 api_server.go:141] control plane version: v1.32.1
I0127 03:02:44.998310 1119263 api_server.go:131] duration metric: took 6.891198ms to wait for apiserver health ...
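
The healthz wait reduces to GETting https://192.168.39.145:8443/healthz until it returns 200 with body "ok". A self-contained probe (certificate verification is skipped here only to keep the sketch free of the cluster CA; minikube's own check handles TLS differently):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        resp, err := client.Get("https://192.168.39.145:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
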
I0127 03:02:44.998321 1119263 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 03:02:45.087014 1119263 system_pods.go:59] 9 kube-system pods found
I0127 03:02:45.087059 1119263 system_pods.go:61] "coredns-668d6bf9bc-mbkl2" [29059a1e-4228-4fbc-bf18-0de800cbb47a] Running
I0127 03:02:45.087067 1119263 system_pods.go:61] "coredns-668d6bf9bc-n5wn4" [416b5ae4-f786-4b1e-a699-d688b967a6f4] Running
I0127 03:02:45.087073 1119263 system_pods.go:61] "etcd-embed-certs-264552" [b2389caf-28fb-42d8-9912-8c3829f8bfd6] Running
I0127 03:02:45.087079 1119263 system_pods.go:61] "kube-apiserver-embed-certs-264552" [0150043f-38b8-4946-84f1-0c9c7aaf7328] Running
I0127 03:02:45.087084 1119263 system_pods.go:61] "kube-controller-manager-embed-certs-264552" [940554f4-564d-4939-a09a-0ea61e36ff6c] Running
I0127 03:02:45.087090 1119263 system_pods.go:61] "kube-proxy-kwqqr" [85b35a19-646d-43a8-b90f-c5a5b4a93393] Running
I0127 03:02:45.087096 1119263 system_pods.go:61] "kube-scheduler-embed-certs-264552" [4a578d9d-f487-4839-a23d-1ec267612f0d] Running
I0127 03:02:45.087106 1119263 system_pods.go:61] "metrics-server-f79f97bbb-6dg5x" [4b9cd5d7-1055-45ea-8ac9-1a91b9246c0e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 03:02:45.087114 1119263 system_pods.go:61] "storage-provisioner" [4e4e1f9a-505b-4ed2-ad81-5543176f645a] Running
I0127 03:02:45.087123 1119263 system_pods.go:74] duration metric: took 88.795129ms to wait for pod list to return data ...
I0127 03:02:45.087134 1119263 default_sa.go:34] waiting for default service account to be created ...
I0127 03:02:45.282547 1119263 default_sa.go:45] found service account: "default"
I0127 03:02:45.282578 1119263 default_sa.go:55] duration metric: took 195.436382ms for default service account to be created ...
I0127 03:02:45.282589 1119263 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 03:02:45.486513 1119263 system_pods.go:87] 9 kube-system pods found
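
The system_pods checks above list the kube-system namespace and report each pod's state, which is how the Pending metrics-server pod shows up. A minimal client-go equivalent, again assuming the in-VM kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            running := p.Status.Phase == corev1.PodRunning
            fmt.Printf("%-55s running=%v\n", p.Name, running)
        }
    }
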
I0127 03:02:52.671028 1119269 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0127 03:02:52.671099 1119269 kubeadm.go:310] [preflight] Running pre-flight checks
I0127 03:02:52.671206 1119269 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0127 03:02:52.671380 1119269 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0127 03:02:52.671539 1119269 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0127 03:02:52.671639 1119269 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0127 03:02:52.673297 1119269 out.go:235] - Generating certificates and keys ...
I0127 03:02:52.673383 1119269 kubeadm.go:310] [certs] Using existing ca certificate authority
I0127 03:02:52.673474 1119269 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0127 03:02:52.673554 1119269 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0127 03:02:52.673609 1119269 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
I0127 03:02:52.673670 1119269 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
I0127 03:02:52.673716 1119269 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
I0127 03:02:52.673767 1119269 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
I0127 03:02:52.673816 1119269 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
I0127 03:02:52.673876 1119269 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0127 03:02:52.673954 1119269 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0127 03:02:52.673999 1119269 kubeadm.go:310] [certs] Using the existing "sa" key
I0127 03:02:52.674047 1119269 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0127 03:02:52.674108 1119269 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0127 03:02:52.674187 1119269 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0127 03:02:52.674263 1119269 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0127 03:02:52.674321 1119269 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0127 03:02:52.674367 1119269 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0127 03:02:52.674447 1119269 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0127 03:02:52.674507 1119269 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0127 03:02:52.675997 1119269 out.go:235] - Booting up control plane ...
I0127 03:02:52.676130 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0127 03:02:52.676280 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0127 03:02:52.676377 1119269 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0127 03:02:52.676517 1119269 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0127 03:02:52.676652 1119269 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0127 03:02:52.676719 1119269 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0127 03:02:52.676922 1119269 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0127 03:02:52.677082 1119269 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0127 03:02:52.677173 1119269 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001864315s
I0127 03:02:52.677287 1119269 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0127 03:02:52.677368 1119269 kubeadm.go:310] [api-check] The API server is healthy after 5.001344194s
I0127 03:02:52.677511 1119269 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0127 03:02:52.677653 1119269 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0127 03:02:52.677715 1119269 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0127 03:02:52.677867 1119269 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-717075 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0127 03:02:52.677952 1119269 kubeadm.go:310] [bootstrap-token] Using token: dptef9.zgjhm0hnxmak7ndp
I0127 03:02:52.679531 1119269 out.go:235] - Configuring RBAC rules ...
I0127 03:02:52.679681 1119269 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0127 03:02:52.679793 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0127 03:02:52.680000 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0127 03:02:52.680151 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0127 03:02:52.680307 1119269 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0127 03:02:52.680415 1119269 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0127 03:02:52.680548 1119269 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0127 03:02:52.680611 1119269 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0127 03:02:52.680680 1119269 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0127 03:02:52.680690 1119269 kubeadm.go:310]
I0127 03:02:52.680769 1119269 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0127 03:02:52.680779 1119269 kubeadm.go:310]
I0127 03:02:52.680875 1119269 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0127 03:02:52.680886 1119269 kubeadm.go:310]
I0127 03:02:52.680922 1119269 kubeadm.go:310] mkdir -p $HOME/.kube
I0127 03:02:52.681024 1119269 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0127 03:02:52.681096 1119269 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0127 03:02:52.681106 1119269 kubeadm.go:310]
I0127 03:02:52.681192 1119269 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0127 03:02:52.681208 1119269 kubeadm.go:310]
I0127 03:02:52.681275 1119269 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0127 03:02:52.681289 1119269 kubeadm.go:310]
I0127 03:02:52.681363 1119269 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0127 03:02:52.681491 1119269 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0127 03:02:52.681562 1119269 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0127 03:02:52.681568 1119269 kubeadm.go:310]
I0127 03:02:52.681636 1119269 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0127 03:02:52.681749 1119269 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0127 03:02:52.681759 1119269 kubeadm.go:310]
I0127 03:02:52.681896 1119269 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dptef9.zgjhm0hnxmak7ndp \
I0127 03:02:52.682053 1119269 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba \
I0127 03:02:52.682085 1119269 kubeadm.go:310] --control-plane
I0127 03:02:52.682091 1119269 kubeadm.go:310]
I0127 03:02:52.682242 1119269 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0127 03:02:52.682259 1119269 kubeadm.go:310]
I0127 03:02:52.682381 1119269 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token dptef9.zgjhm0hnxmak7ndp \
I0127 03:02:52.682532 1119269 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:0bba8d4e4b3afb129d2d18e2e045cd48b3419c300ae73ce15b73c31a6c21b1ba
I0127 03:02:52.682561 1119269 cni.go:84] Creating CNI manager for ""
I0127 03:02:52.682574 1119269 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0127 03:02:52.684226 1119269 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0127 03:02:52.685352 1119269 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0127 03:02:52.697398 1119269 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
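
The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config that the "Configuring bridge CNI" step writes. A representative conflist of that shape, written the same way (the JSON below is an assumption about typical bridge/host-local settings, not a byte-for-byte copy of minikube's template):

    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
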
I0127 03:02:52.719046 1119269 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0127 03:02:52.719104 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:52.719145 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-717075 minikube.k8s.io/updated_at=2025_01_27T03_02_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=default-k8s-diff-port-717075 minikube.k8s.io/primary=true
I0127 03:02:52.761799 1119269 ops.go:34] apiserver oom_adj: -16
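
The oom_adj probe above shells out to read /proc/$(pgrep kube-apiserver)/oom_adj; the -16 result means the kernel OOM killer strongly avoids the apiserver. The equivalent in Go (taking the first pgrep match if several processes happen to match is an assumption of this sketch):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // e.g. -16
    }
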
I0127 03:02:52.952929 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:53.453841 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:53.953656 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:54.453137 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:54.953750 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:55.453823 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:55.953104 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:56.453840 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:56.953721 1119269 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0127 03:02:57.072043 1119269 kubeadm.go:1113] duration metric: took 4.352992678s to wait for elevateKubeSystemPrivileges
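
The repeated "kubectl get sa default" runs above are a poll loop: kubeadm init has finished, and the wait is for the default service account to appear before the kube-system RBAC binding can take effect. The loop shape, as a sketch (the ~500ms interval matches the log cadence; the 2-minute budget is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo",
                "/var/lib/minikube/binaries/v1.32.1/kubectl",
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
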
I0127 03:02:57.072116 1119269 kubeadm.go:394] duration metric: took 4m37.021077009s to StartCluster
I0127 03:02:57.072145 1119269 settings.go:142] acquiring lock: {Name:mkfac79776d8549aa482287d1af528efdec15d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:02:57.072271 1119269 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20316-1057178/kubeconfig
I0127 03:02:57.073904 1119269 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-1057178/kubeconfig: {Name:mke4bd9fc891569e5d6830fdf173fa5043f6c0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0127 03:02:57.074254 1119269 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.17 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0127 03:02:57.074373 1119269 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0127 03:02:57.074508 1119269 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-717075"
I0127 03:02:57.074520 1119269 config.go:182] Loaded profile config "default-k8s-diff-port-717075": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 03:02:57.074535 1119269 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-717075"
W0127 03:02:57.074544 1119269 addons.go:247] addon storage-provisioner should already be in state true
I0127 03:02:57.074540 1119269 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-717075"
I0127 03:02:57.074579 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
I0127 03:02:57.074576 1119269 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717075"
I0127 03:02:57.074572 1119269 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-717075"
I0127 03:02:57.074588 1119269 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-717075"
I0127 03:02:57.074605 1119269 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-717075"
I0127 03:02:57.074614 1119269 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-717075"
W0127 03:02:57.074616 1119269 addons.go:247] addon dashboard should already be in state true
W0127 03:02:57.074623 1119269 addons.go:247] addon metrics-server should already be in state true
I0127 03:02:57.074653 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
I0127 03:02:57.074659 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
I0127 03:02:57.075056 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:57.075068 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:57.075068 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:57.075121 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:57.075123 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:57.075163 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:57.075267 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:57.075353 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:57.081008 1119269 out.go:177] * Verifying Kubernetes components...
I0127 03:02:57.082885 1119269 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0127 03:02:57.094206 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
I0127 03:02:57.094931 1119269 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:57.095746 1119269 main.go:141] libmachine: Using API Version 1
I0127 03:02:57.095766 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:57.095843 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
I0127 03:02:57.095963 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
I0127 03:02:57.096377 1119269 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:57.096485 1119269 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:57.096649 1119269 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:57.097010 1119269 main.go:141] libmachine: Using API Version 1
I0127 03:02:57.097039 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:57.097172 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:57.097228 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:57.097627 1119269 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:57.097906 1119269 main.go:141] libmachine: Using API Version 1
I0127 03:02:57.097919 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:57.098237 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:57.098286 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:57.098455 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40437
I0127 03:02:57.098935 1119269 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:57.099556 1119269 main.go:141] libmachine: Using API Version 1
I0127 03:02:57.099578 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:57.099797 1119269 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:57.100439 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:57.100480 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:57.100698 1119269 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:57.100896 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
I0127 03:02:57.105155 1119269 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-717075"
W0127 03:02:57.105188 1119269 addons.go:247] addon default-storageclass should already be in state true
I0127 03:02:57.105221 1119269 host.go:66] Checking if "default-k8s-diff-port-717075" exists ...
I0127 03:02:57.105609 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:57.105668 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:57.121375 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
I0127 03:02:57.121658 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
I0127 03:02:57.121901 1119269 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:57.122123 1119269 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:57.122486 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
I0127 03:02:57.122504 1119269 main.go:141] libmachine: Using API Version 1
I0127 03:02:57.122523 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:57.122758 1119269 main.go:141] libmachine: Using API Version 1
I0127 03:02:57.122778 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:57.122813 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
I0127 03:02:57.122851 1119269 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:57.122923 1119269 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:57.123171 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
I0127 03:02:57.123241 1119269 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:57.123868 1119269 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:57.123978 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
I0127 03:02:57.123990 1119269 main.go:141] libmachine: Using API Version 1
I0127 03:02:57.124007 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:57.124368 1119269 main.go:141] libmachine: Using API Version 1
I0127 03:02:57.124387 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:57.124452 1119269 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:57.124681 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
I0127 03:02:57.124733 1119269 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:57.125300 1119269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 03:02:57.125347 1119269 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 03:02:57.126534 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
I0127 03:02:57.127123 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
I0127 03:02:57.127415 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
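[editor's note] The repeated "Launching plugin server" / "Plugin server listening at address 127.0.0.1:<port>" lines above reflect libmachine's driver-plugin model: each driver call spawns the kvm2 driver binary, which serves its API over RPC on a random loopback port. A toy Go sketch of that pattern follows (net/rpc over a kernel-assigned loopback port; the real driver interface is much larger, and the method shown is illustrative only):

package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Driver stands in for the kvm2 driver plugin; the real API surface
// (GetVersion, GetMachineName, GetState, ...) is much larger.
type Driver struct{}

// GetVersion mirrors the "Using API Version 1" handshake seen in the log.
func (d *Driver) GetVersion(args int, reply *int) error {
	*reply = 1
	return nil
}

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(new(Driver)); err != nil {
		panic(err)
	}
	// Port 0 asks the kernel for a free port, like the log's 40437, 36179, ...
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	fmt.Println("Plugin server listening at address", ln.Addr())
	for {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		go srv.ServeConn(conn)
	}
}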
I0127 03:02:57.128921 1119269 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0127 03:02:57.128930 1119269 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0127 03:02:57.128931 1119269 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0127 03:02:57.130374 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0127 03:02:57.130393 1119269 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:02:57.130411 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0127 03:02:57.130431 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
I0127 03:02:57.130395 1119269 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 03:02:57.130396 1119269 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0127 03:02:57.130621 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
I0127 03:02:57.132516 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 03:02:57.132532 1119269 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 03:02:57.132547 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
I0127 03:02:57.135860 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
I0127 03:02:57.135912 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
I0127 03:02:57.136120 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
I0127 03:02:57.136644 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
I0127 03:02:57.136669 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
I0127 03:02:57.136702 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
I0127 03:02:57.136736 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
I0127 03:02:57.136747 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
I0127 03:02:57.136809 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
I0127 03:02:57.137008 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
I0127 03:02:57.136938 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
I0127 03:02:57.137108 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
I0127 03:02:57.137179 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
I0127 03:02:57.137309 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
I0127 03:02:57.137376 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
I0127 03:02:57.137403 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
I0127 03:02:57.137589 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
I0127 03:02:57.137621 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
I0127 03:02:57.137794 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
I0127 03:02:57.138008 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
I0127 03:02:57.138010 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
I0127 03:02:57.152787 1119269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
I0127 03:02:57.153399 1119269 main.go:141] libmachine: () Calling .GetVersion
I0127 03:02:57.153967 1119269 main.go:141] libmachine: Using API Version 1
I0127 03:02:57.154002 1119269 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 03:02:57.154377 1119269 main.go:141] libmachine: () Calling .GetMachineName
I0127 03:02:57.154584 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetState
I0127 03:02:57.156381 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .DriverName
I0127 03:02:57.156603 1119269 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0127 03:02:57.156624 1119269 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0127 03:02:57.156649 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHHostname
I0127 03:02:57.159499 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
I0127 03:02:57.159944 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:da:ad", ip: ""} in network mk-default-k8s-diff-port-717075: {Iface:virbr4 ExpiryTime:2025-01-27 03:58:09 +0000 UTC Type:0 Mac:52:54:00:22:da:ad Iaid: IPaddr:192.168.72.17 Prefix:24 Hostname:default-k8s-diff-port-717075 Clientid:01:52:54:00:22:da:ad}
I0127 03:02:57.160261 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | domain default-k8s-diff-port-717075 has defined IP address 192.168.72.17 and MAC address 52:54:00:22:da:ad in network mk-default-k8s-diff-port-717075
I0127 03:02:57.160520 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHPort
I0127 03:02:57.160684 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHKeyPath
I0127 03:02:57.163248 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .GetSSHUsername
I0127 03:02:57.164348 1119269 sshutil.go:53] new ssh client: &{IP:192.168.72.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa Username:docker}
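[editor's note] Each "new ssh client" line above wraps a connection to the VM at 192.168.72.17:22 using the profile's id_rsa key and the docker user. A minimal sketch with golang.org/x/crypto/ssh follows (this is not minikube's sshutil; host-key verification is skipped here only because the target is a throwaway test VM):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address are taken verbatim from the log lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20316-1057178/.minikube/machines/default-k8s-diff-port-717075/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; never in production
	}
	client, err := ssh.Dial("tcp", "192.168.72.17:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	// The next step the log performs over this channel:
	out, err := session.CombinedOutput("sudo systemctl start kubelet")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}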
I0127 03:02:57.378051 1119269 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0127 03:02:57.433542 1119269 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717075" to be "Ready" ...
I0127 03:02:57.474874 1119269 node_ready.go:49] node "default-k8s-diff-port-717075" has status "Ready":"True"
I0127 03:02:57.474911 1119269 node_ready.go:38] duration metric: took 41.327465ms for node "default-k8s-diff-port-717075" to be "Ready" ...
I0127 03:02:57.474926 1119269 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0127 03:02:57.483255 1119269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace to be "Ready" ...
I0127 03:02:57.519688 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0127 03:02:57.542506 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0127 03:02:57.549073 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0127 03:02:57.549102 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0127 03:02:57.584535 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 03:02:57.584568 1119269 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 03:02:57.655673 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 03:02:57.655711 1119269 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 03:02:57.690996 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0127 03:02:57.691028 1119269 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0127 03:02:57.822313 1119269 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0127 03:02:57.822349 1119269 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0127 03:02:57.834363 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 03:02:57.834392 1119269 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 03:02:57.911077 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0127 03:02:58.019919 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 03:02:58.019953 1119269 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0127 03:02:58.212111 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 03:02:58.212145 1119269 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 03:02:58.309353 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 03:02:58.309381 1119269 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 03:02:58.378582 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 03:02:58.378611 1119269 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 03:02:58.444731 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 03:02:58.444762 1119269 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 03:02:58.506703 1119269 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 03:02:58.506745 1119269 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 03:02:58.584131 1119269 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
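[editor's note] Each "Run: sudo KUBECONFIG=... kubectl apply -f ..." line above is executed on the VM through minikube's ssh_runner. As a rough local equivalent (a sketch only; the real code streams the command over the SSH session opened earlier rather than exec'ing locally):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command string as the storage-provisioner apply in the log;
	// sudo accepts the KUBECONFIG=... assignment before the command.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.32.1/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}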
I0127 03:02:58.850852 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.331110115s)
I0127 03:02:58.850948 1119269 main.go:141] libmachine: Making call to close driver server
I0127 03:02:58.850973 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
I0127 03:02:58.850970 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.308397522s)
I0127 03:02:58.851017 1119269 main.go:141] libmachine: Making call to close driver server
I0127 03:02:58.851054 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
I0127 03:02:58.851306 1119269 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:58.851328 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:58.851341 1119269 main.go:141] libmachine: Making call to close driver server
I0127 03:02:58.851348 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
I0127 03:02:58.851426 1119269 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:58.851444 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:58.851465 1119269 main.go:141] libmachine: Making call to close driver server
I0127 03:02:58.851476 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
I0127 03:02:58.851634 1119269 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:58.851650 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:58.851693 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
I0127 03:02:58.851740 1119269 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:58.851762 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
I0127 03:02:58.851765 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:58.886972 1119269 main.go:141] libmachine: Making call to close driver server
I0127 03:02:58.887006 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
I0127 03:02:58.887346 1119269 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:58.887369 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:59.219464 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.308329693s)
I0127 03:02:59.219531 1119269 main.go:141] libmachine: Making call to close driver server
I0127 03:02:59.219552 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
I0127 03:02:59.219946 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
I0127 03:02:59.220003 1119269 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:59.220024 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:59.220045 1119269 main.go:141] libmachine: Making call to close driver server
I0127 03:02:59.220059 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
I0127 03:02:59.220303 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
I0127 03:02:59.220340 1119269 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:02:59.220349 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:02:59.220364 1119269 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-717075"
I0127 03:02:59.493877 1119269 pod_ready.go:93] pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace has status "Ready":"True"
I0127 03:02:59.493919 1119269 pod_ready.go:82] duration metric: took 2.010631788s for pod "coredns-668d6bf9bc-htglq" in "kube-system" namespace to be "Ready" ...
I0127 03:02:59.493932 1119269 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace to be "Ready" ...
I0127 03:03:00.135755 1119269 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.551568283s)
I0127 03:03:00.135819 1119269 main.go:141] libmachine: Making call to close driver server
I0127 03:03:00.135831 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
I0127 03:03:00.136153 1119269 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:03:00.136171 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:03:00.136179 1119269 main.go:141] libmachine: Making call to close driver server
I0127 03:03:00.136187 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) Calling .Close
I0127 03:03:00.136181 1119269 main.go:141] libmachine: (default-k8s-diff-port-717075) DBG | Closing plugin on server side
I0127 03:03:00.136446 1119269 main.go:141] libmachine: Successfully made call to close driver server
I0127 03:03:00.136459 1119269 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 03:03:00.137984 1119269 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p default-k8s-diff-port-717075 addons enable metrics-server
I0127 03:03:00.139476 1119269 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
I0127 03:03:00.140933 1119269 addons.go:514] duration metric: took 3.06657827s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
I0127 03:03:01.501546 1119269 pod_ready.go:103] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"False"
I0127 03:03:04.000116 1119269 pod_ready.go:103] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"False"
I0127 03:03:05.002068 1119269 pod_ready.go:93] pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace has status "Ready":"True"
I0127 03:03:05.002134 1119269 pod_ready.go:82] duration metric: took 5.508188953s for pod "coredns-668d6bf9bc-pwz9n" in "kube-system" namespace to be "Ready" ...
I0127 03:03:05.002149 1119269 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
I0127 03:03:05.007136 1119269 pod_ready.go:93] pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
I0127 03:03:05.007163 1119269 pod_ready.go:82] duration metric: took 5.003743ms for pod "etcd-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
I0127 03:03:05.007173 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
I0127 03:03:06.013821 1119269 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
I0127 03:03:06.013847 1119269 pod_ready.go:82] duration metric: took 1.006667196s for pod "kube-apiserver-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
I0127 03:03:06.013860 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
I0127 03:03:06.018661 1119269 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
I0127 03:03:06.018683 1119269 pod_ready.go:82] duration metric: took 4.814763ms for pod "kube-controller-manager-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
I0127 03:03:06.018694 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nlkhv" in "kube-system" namespace to be "Ready" ...
I0127 03:03:06.022482 1119269 pod_ready.go:93] pod "kube-proxy-nlkhv" in "kube-system" namespace has status "Ready":"True"
I0127 03:03:06.022500 1119269 pod_ready.go:82] duration metric: took 3.79842ms for pod "kube-proxy-nlkhv" in "kube-system" namespace to be "Ready" ...
I0127 03:03:06.022512 1119269 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
I0127 03:03:06.197960 1119269 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace has status "Ready":"True"
I0127 03:03:06.197986 1119269 pod_ready.go:82] duration metric: took 175.467759ms for pod "kube-scheduler-default-k8s-diff-port-717075" in "kube-system" namespace to be "Ready" ...
I0127 03:03:06.197995 1119269 pod_ready.go:39] duration metric: took 8.723057571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
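[editor's note] The pod_ready.go loop above polls each system-critical pod until its Ready condition is True, within a 6m0s budget per pod. A minimal client-go sketch of the same check follows (an assumed stand-in, not minikube's pod_ready.go; the kubeconfig path and pod name are taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, give up after 6m, matching the log's per-pod budget.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-668d6bf9bc-htglq", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep retrying
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready:", err == nil)
}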
I0127 03:03:06.198012 1119269 api_server.go:52] waiting for apiserver process to appear ...
I0127 03:03:06.198073 1119269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 03:03:06.215210 1119269 api_server.go:72] duration metric: took 9.140900628s to wait for apiserver process to appear ...
I0127 03:03:06.215249 1119269 api_server.go:88] waiting for apiserver healthz status ...
I0127 03:03:06.215273 1119269 api_server.go:253] Checking apiserver healthz at https://192.168.72.17:8444/healthz ...
I0127 03:03:06.219951 1119269 api_server.go:279] https://192.168.72.17:8444/healthz returned 200:
ok
I0127 03:03:06.220901 1119269 api_server.go:141] control plane version: v1.32.1
I0127 03:03:06.220922 1119269 api_server.go:131] duration metric: took 5.666132ms to wait for apiserver health ...
I0127 03:03:06.220929 1119269 system_pods.go:43] waiting for kube-system pods to appear ...
I0127 03:03:06.402128 1119269 system_pods.go:59] 9 kube-system pods found
I0127 03:03:06.402165 1119269 system_pods.go:61] "coredns-668d6bf9bc-htglq" [2d4500a2-7bc9-4c25-af55-3c20257065c2] Running
I0127 03:03:06.402172 1119269 system_pods.go:61] "coredns-668d6bf9bc-pwz9n" [cf6b7c7c-59eb-4901-88ba-a6e4556ddd4c] Running
I0127 03:03:06.402177 1119269 system_pods.go:61] "etcd-default-k8s-diff-port-717075" [50fac615-6926-4023-8467-fa0c3fec39b2] Running
I0127 03:03:06.402181 1119269 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717075" [f86307a0-5994-4178-af8a-43613ed2bd63] Running
I0127 03:03:06.402186 1119269 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717075" [543f1b9a-da5a-4963-adc0-3bb2c88f2de0] Running
I0127 03:03:06.402191 1119269 system_pods.go:61] "kube-proxy-nlkhv" [57c52d4f-937f-4fc8-98dd-9aa0531f8d17] Running
I0127 03:03:06.402197 1119269 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717075" [bb54f953-7c1f-4ce8-a590-7d029dcfea24] Running
I0127 03:03:06.402205 1119269 system_pods.go:61] "metrics-server-f79f97bbb-fthnn" [fb8e4d08-fb1f-49a5-8984-44e975174502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0127 03:03:06.402211 1119269 system_pods.go:61] "storage-provisioner" [0a7c6b15-4ec5-46cf-8f6e-d98c292af196] Running
I0127 03:03:06.402225 1119269 system_pods.go:74] duration metric: took 181.288367ms to wait for pod list to return data ...
I0127 03:03:06.402236 1119269 default_sa.go:34] waiting for default service account to be created ...
I0127 03:03:06.598976 1119269 default_sa.go:45] found service account: "default"
I0127 03:03:06.599007 1119269 default_sa.go:55] duration metric: took 196.76041ms for default service account to be created ...
I0127 03:03:06.599017 1119269 system_pods.go:137] waiting for k8s-apps to be running ...
I0127 03:03:06.802139 1119269 system_pods.go:87] 9 kube-system pods found
==> container status <==
CONTAINER       IMAGE           CREATED          STATE    NAME                       ATTEMPT  POD ID          POD
52be15a103b51   523cad1a4df73   29 seconds ago   Exited   dashboard-metrics-scraper  9        712a724f859bb   dashboard-metrics-scraper-86c6bf9756-k2z8t
c623878236cab   07655ddf2eebe   21 minutes ago   Running  kubernetes-dashboard       0        33a2b97eec49d   kubernetes-dashboard-7779f9b69b-7zlvr
c1d994b589453   6e38f40d628db   21 minutes ago   Running  storage-provisioner        0        1a96975049b69   storage-provisioner
d8466597996e8   c69fa2e9cbf5f   21 minutes ago   Running  coredns                    0        b9bef54853881   coredns-668d6bf9bc-86j6q
a0b17beaa8251   c69fa2e9cbf5f   21 minutes ago   Running  coredns                    0        e1ea225e4e626   coredns-668d6bf9bc-fk8cw
89845d408bed3   e29f9c7391fd9   21 minutes ago   Running  kube-proxy                 0        85bbda280c0ca   kube-proxy-45pz6
f8dd73f608c82   2b0d6572d062c   21 minutes ago   Running  kube-scheduler             2        bae59ee898b44   kube-scheduler-no-preload-887091
b8952681ec21a   a9e7e6b294baf   21 minutes ago   Running  etcd                       2        9b5923edae55c   etcd-no-preload-887091
062301b551bd4   019ee182b58e2   21 minutes ago   Running  kube-controller-manager    2        ceef7cf796b46   kube-controller-manager-no-preload-887091
786778ce9f4d3   95c0bda56fc4d   21 minutes ago   Running  kube-apiserver             2        71b51ccde95cd   kube-apiserver-no-preload-887091
==> containerd <==
Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.396149697Z" level=info msg="CreateContainer within sandbox \"712a724f859bbef28a8fab7b018ed3fc9cd01252e3a35c5d1f53dd383339dada\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,} returns container id \"b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04\""
Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.397548944Z" level=info msg="StartContainer for \"b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04\""
Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.495636288Z" level=info msg="StartContainer for \"b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04\" returns successfully"
Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.542203640Z" level=info msg="shim disconnected" id=b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04 namespace=k8s.io
Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.542414462Z" level=warning msg="cleaning up after shim disconnected" id=b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04 namespace=k8s.io
Jan 27 03:18:07 no-preload-887091 containerd[555]: time="2025-01-27T03:18:07.542425633Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 03:18:08 no-preload-887091 containerd[555]: time="2025-01-27T03:18:08.374948584Z" level=info msg="RemoveContainer for \"7afdf5ec91198c1839ee48b40244e47f8195a3771b75b64eafca838b916045db\""
Jan 27 03:18:08 no-preload-887091 containerd[555]: time="2025-01-27T03:18:08.382659346Z" level=info msg="RemoveContainer for \"7afdf5ec91198c1839ee48b40244e47f8195a3771b75b64eafca838b916045db\" returns successfully"
Jan 27 03:18:16 no-preload-887091 containerd[555]: time="2025-01-27T03:18:16.365269759Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 03:18:16 no-preload-887091 containerd[555]: time="2025-01-27T03:18:16.376975868Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 27 03:18:16 no-preload-887091 containerd[555]: time="2025-01-27T03:18:16.378939196Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 27 03:18:16 no-preload-887091 containerd[555]: time="2025-01-27T03:18:16.379035528Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.368068895Z" level=info msg="CreateContainer within sandbox \"712a724f859bbef28a8fab7b018ed3fc9cd01252e3a35c5d1f53dd383339dada\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,}"
Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.394110770Z" level=info msg="CreateContainer within sandbox \"712a724f859bbef28a8fab7b018ed3fc9cd01252e3a35c5d1f53dd383339dada\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,} returns container id \"52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb\""
Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.395118548Z" level=info msg="StartContainer for \"52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb\""
Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.492039113Z" level=info msg="StartContainer for \"52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb\" returns successfully"
Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.545931308Z" level=info msg="shim disconnected" id=52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb namespace=k8s.io
Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.546043278Z" level=warning msg="cleaning up after shim disconnected" id=52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb namespace=k8s.io
Jan 27 03:23:15 no-preload-887091 containerd[555]: time="2025-01-27T03:23:15.546054640Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 27 03:23:16 no-preload-887091 containerd[555]: time="2025-01-27T03:23:16.141497054Z" level=info msg="RemoveContainer for \"b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04\""
Jan 27 03:23:16 no-preload-887091 containerd[555]: time="2025-01-27T03:23:16.148607925Z" level=info msg="RemoveContainer for \"b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04\" returns successfully"
Jan 27 03:23:19 no-preload-887091 containerd[555]: time="2025-01-27T03:23:19.365333645Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jan 27 03:23:19 no-preload-887091 containerd[555]: time="2025-01-27T03:23:19.374785620Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
Jan 27 03:23:19 no-preload-887091 containerd[555]: time="2025-01-27T03:23:19.377055116Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
Jan 27 03:23:19 no-preload-887091 containerd[555]: time="2025-01-27T03:23:19.377132516Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
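[editor's note] The recurring PullImage failures above appear expected rather than a regression: the start output earlier shows metrics-server configured with the image fake.domain/registry.k8s.io/echoserver:1.4, and fake.domain never resolves. The underlying error can be reproduced with a bare DNS lookup:

package main

import (
	"fmt"
	"net"
)

func main() {
	// containerd's "dial tcp: lookup fake.domain: no such host" originates
	// from exactly this resolution step (wording varies by resolver).
	if _, err := net.LookupHost("fake.domain"); err != nil {
		fmt.Println(err)
	}
}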
==> coredns [a0b17beaa8251fabd82fb44dc88123c6eacacd5d8fd174979a3a7849a205fc81] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> coredns [d8466597996e84b368a8c1d42dd8e6e8e25d177a043d482029dde1ea6da57bc8] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
CoreDNS-1.11.3
linux/amd64, go1.21.11, a6338e9
==> describe nodes <==
Name: no-preload-887091
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=no-preload-887091
kubernetes.io/os=linux
minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
minikube.k8s.io/name=no-preload-887091
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_27T03_02_12_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 27 Jan 2025 03:02:08 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: no-preload-887091
AcquireTime: <unset>
RenewTime: Mon, 27 Jan 2025 03:23:39 +0000
Conditions:
Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
----            ------  -----------------                ------------------               ------                      -------
MemoryPressure  False   Mon, 27 Jan 2025 03:20:52 +0000  Mon, 27 Jan 2025 03:02:06 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
DiskPressure    False   Mon, 27 Jan 2025 03:20:52 +0000  Mon, 27 Jan 2025 03:02:06 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
PIDPressure     False   Mon, 27 Jan 2025 03:20:52 +0000  Mon, 27 Jan 2025 03:02:06 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
Ready           True    Mon, 27 Jan 2025 03:20:52 +0000  Mon, 27 Jan 2025 03:02:08 +0000  KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.61.201
Hostname: no-preload-887091
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164184Ki
pods: 110
System Info:
Machine ID: b5097775ecaf41659f7fab7087aa51ad
System UUID: b5097775-ecaf-4165-9f7f-ab7087aa51ad
Boot ID: b04cfcf9-a4ff-4126-923b-98e2b7343e1f
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.7.23
Kubelet Version: v1.32.1
Kube-Proxy Version: v1.32.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace             Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
---------             ----                                        ------------  ----------  ---------------  -------------  ---
kube-system           coredns-668d6bf9bc-86j6q                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
kube-system           coredns-668d6bf9bc-fk8cw                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
kube-system           etcd-no-preload-887091                      100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
kube-system           kube-apiserver-no-preload-887091            250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
kube-system           kube-controller-manager-no-preload-887091   200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
kube-system           kube-proxy-45pz6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
kube-system           kube-scheduler-no-preload-887091            100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
kube-system           metrics-server-f79f97bbb-vshg4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
kube-system           storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
kubernetes-dashboard  dashboard-metrics-scraper-86c6bf9756-k2z8t  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
kubernetes-dashboard  kubernetes-dashboard-7779f9b69b-7zlvr       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests     Limits
--------           --------     ------
cpu                950m (47%)   0 (0%)
memory             440Mi (20%)  340Mi (16%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type    Reason                   Age                From             Message
----    ------                   ----               ----             -------
Normal  Starting                 21m                kube-proxy
Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-887091 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-887091 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-887091 status is now: NodeHasSufficientPID
Normal  Starting                 21m                kubelet          Starting kubelet.
Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-887091 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-887091 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-887091 status is now: NodeHasSufficientPID
Normal  RegisteredNode           21m                node-controller  Node no-preload-887091 event: Registered Node no-preload-887091 in Controller
==> dmesg <==
[ +0.053207] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
[ +0.041788] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.942372] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.847394] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +1.666860] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +7.235944] systemd-fstab-generator[478]: Ignoring "noauto" option for root device
[ +0.066561] kauditd_printk_skb: 1 callbacks suppressed
[ +0.079419] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
[ +0.155842] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
[ +0.153234] systemd-fstab-generator[516]: Ignoring "noauto" option for root device
[ +0.284968] systemd-fstab-generator[547]: Ignoring "noauto" option for root device
[ +1.272885] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
[ +2.240619] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
[ +0.877564] kauditd_printk_skb: 225 callbacks suppressed
[ +5.546711] kauditd_printk_skb: 74 callbacks suppressed
[ +11.482872] kauditd_printk_skb: 48 callbacks suppressed
[Jan27 03:02] systemd-fstab-generator[3087]: Ignoring "noauto" option for root device
[ +6.582425] systemd-fstab-generator[3461]: Ignoring "noauto" option for root device
[ +0.115221] kauditd_printk_skb: 87 callbacks suppressed
[ +4.900139] systemd-fstab-generator[3562]: Ignoring "noauto" option for root device
[ +0.100609] kauditd_printk_skb: 28 callbacks suppressed
[ +8.497695] kauditd_printk_skb: 96 callbacks suppressed
[ +5.097265] kauditd_printk_skb: 8 callbacks suppressed
==> etcd [b8952681ec21a9a0b2eaeeb1cf22e6a83ba35d8149bc0bcc150b663e15c96e8b] <==
{"level":"info","ts":"2025-01-27T03:02:06.713943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f000dedbcae268ef elected leader f000dedbcae268ef at term 2"}
{"level":"info","ts":"2025-01-27T03:02:06.718958Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-27T03:02:06.722079Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f000dedbcae268ef","local-member-attributes":"{Name:no-preload-887091 ClientURLs:[https://192.168.61.201:2379]}","request-path":"/0/members/f000dedbcae268ef/attributes","cluster-id":"334af0e9e11f35f3","publish-timeout":"7s"}
{"level":"info","ts":"2025-01-27T03:02:06.722552Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-01-27T03:02:06.723225Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-01-27T03:02:06.723394Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"334af0e9e11f35f3","local-member-id":"f000dedbcae268ef","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-27T03:02:06.728916Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-27T03:02:06.730812Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-27T03:02:06.723470Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-01-27T03:02:06.723983Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-27T03:02:06.728489Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-27T03:02:06.732162Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.201:2379"}
{"level":"info","ts":"2025-01-27T03:02:06.737840Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-01-27T03:02:06.731286Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-01-27T03:02:27.106125Z","caller":"traceutil/trace.go:171","msg":"trace[1771517659] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"106.806683ms","start":"2025-01-27T03:02:26.998039Z","end":"2025-01-27T03:02:27.104846Z","steps":["trace[1771517659] 'process raft request' (duration: 106.581629ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T03:02:28.117947Z","caller":"traceutil/trace.go:171","msg":"trace[396997179] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"113.836092ms","start":"2025-01-27T03:02:28.004090Z","end":"2025-01-27T03:02:28.117926Z","steps":["trace[396997179] 'process raft request' (duration: 113.051193ms)"],"step_count":1}
{"level":"info","ts":"2025-01-27T03:12:06.796875Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":879}
{"level":"info","ts":"2025-01-27T03:12:06.840173Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":879,"took":"41.136016ms","hash":416233361,"current-db-size-bytes":3092480,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":3092480,"current-db-size-in-use":"3.1 MB"}
{"level":"info","ts":"2025-01-27T03:12:06.840426Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":416233361,"revision":879,"compact-revision":-1}
{"level":"info","ts":"2025-01-27T03:17:06.807824Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1130}
{"level":"info","ts":"2025-01-27T03:17:06.812599Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1130,"took":"3.999871ms","hash":3786764128,"current-db-size-bytes":3092480,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1753088,"current-db-size-in-use":"1.8 MB"}
{"level":"info","ts":"2025-01-27T03:17:06.812812Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3786764128,"revision":1130,"compact-revision":879}
{"level":"info","ts":"2025-01-27T03:22:06.817514Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1381}
{"level":"info","ts":"2025-01-27T03:22:06.823289Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1381,"took":"4.747701ms","hash":1480603789,"current-db-size-bytes":3092480,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1798144,"current-db-size-in-use":"1.8 MB"}
{"level":"info","ts":"2025-01-27T03:22:06.823334Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1480603789,"revision":1381,"compact-revision":1130}
==> kernel <==
03:23:45 up 26 min, 0 users, load average: 0.18, 0.22, 0.24
Linux no-preload-887091 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [786778ce9f4d324d0b43adbaad49fef2d4cef26a7b57db69061e9a3a8fa8872e] <==
I0127 03:20:09.513140 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 03:20:09.514265 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0127 03:22:08.511486 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 03:22:08.511866 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
W0127 03:22:09.513907 1 handler_proxy.go:99] no RequestInfo found in the context
W0127 03:22:09.513958 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 03:22:09.514411 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
E0127 03:22:09.514552 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0127 03:22:09.516214 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 03:22:09.516526 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0127 03:23:09.516858 1 handler_proxy.go:99] no RequestInfo found in the context
W0127 03:23:09.516860 1 handler_proxy.go:99] no RequestInfo found in the context
E0127 03:23:09.517061 1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
E0127 03:23:09.517204 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0127 03:23:09.518293 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0127 03:23:09.518298 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
==> kube-controller-manager [062301b551bd4f61224a1535e944d5ec7e78ab64d71c01bd6d07c61175163036] <==
E0127 03:18:45.300335 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 03:18:45.353548 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 03:19:15.307525 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 03:19:15.362462 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 03:19:45.314342 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 03:19:45.371696 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 03:20:15.322368 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 03:20:15.380648 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 03:20:45.330972 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 03:20:45.389628 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 03:20:52.869111 1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-887091"
E0127 03:21:15.337527 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 03:21:15.398190 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 03:21:45.344460 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 03:21:45.406610 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 03:22:15.351843 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 03:22:15.414656 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 03:22:45.359327 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 03:22:45.422507 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
E0127 03:23:15.375909 1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
I0127 03:23:15.430465 1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
I0127 03:23:16.162323 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="221.981µs"
I0127 03:23:17.160429 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="76.869µs"
I0127 03:23:30.395234 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="134.715µs"
I0127 03:23:44.381085 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="128.889µs"
==> kube-proxy [89845d408bed3c7d6dfe76f5d2117ad0973f004f9be8c7e57c0c81bfcbcc9a81] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0127 03:02:16.908197 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0127 03:02:16.922602 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.201"]
E0127 03:02:16.922695 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0127 03:02:17.023472 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0127 03:02:17.023523 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0127 03:02:17.023550 1 server_linux.go:170] "Using iptables Proxier"
I0127 03:02:17.026808 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0127 03:02:17.027195 1 server.go:497] "Version info" version="v1.32.1"
I0127 03:02:17.027232 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0127 03:02:17.029073 1 config.go:199] "Starting service config controller"
I0127 03:02:17.029144 1 shared_informer.go:313] Waiting for caches to sync for service config
I0127 03:02:17.029180 1 config.go:105] "Starting endpoint slice config controller"
I0127 03:02:17.029185 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0127 03:02:17.033250 1 config.go:329] "Starting node config controller"
I0127 03:02:17.033262 1 shared_informer.go:313] Waiting for caches to sync for node config
I0127 03:02:17.130837 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0127 03:02:17.130865 1 shared_informer.go:320] Caches are synced for service config
I0127 03:02:17.136859 1 shared_informer.go:320] Caches are synced for node config
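Everything before the node-IP line is kube-proxy probing and cleaning up nftables: the VM kernel rejects "add table" with "Operation not supported" and has no IPv6 iptables at all, so kube-proxy settles on the single-stack IPv4 iptables proxier. A rough standalone probe in the same spirit, assuming the nft binary is installed and the process runs as root; the table name here is a made-up placeholder:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Feed commands over stdin with `nft -f -`, which is what the
	// "/dev/stdin:1:1-25" position in the cleanup error above implies.
	cmd := exec.Command("nft", "-f", "-")
	cmd.Stdin = bytes.NewBufferString("add table ip probe-table\ndelete table ip probe-table\n")
	if out, err := cmd.CombinedOutput(); err != nil {
		// On a kernel without nf_tables support this fails with
		// "Operation not supported", matching the log.
		fmt.Printf("nftables unavailable: %v\n%s", err, out)
		return
	}
	fmt.Println("nftables works for the ip family")
}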
==> kube-scheduler [f8dd73f608c8272c885aecde8660fc054bde10b8e03b7cda7706a4072124259e] <==
W0127 03:02:08.520083 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0127 03:02:08.520593 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 03:02:08.520290 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0127 03:02:08.520693 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 03:02:08.520928 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0127 03:02:08.521700 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 03:02:09.441221 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0127 03:02:09.441392 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 03:02:09.450341 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0127 03:02:09.450419 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 03:02:09.470171 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0127 03:02:09.470449 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0127 03:02:09.483004 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0127 03:02:09.483079 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0127 03:02:09.532058 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0127 03:02:09.532140 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0127 03:02:09.609182 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0127 03:02:09.609254 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 03:02:09.636110 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0127 03:02:09.636185 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0127 03:02:09.797334 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0127 03:02:09.797403 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0127 03:02:09.861565 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0127 03:02:09.861863 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0127 03:02:12.802108 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
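The burst of "forbidden" errors is a startup race, not a persistent failure: the scheduler's informers begin listing before the apiserver has reconciled the default RBAC bindings for system:kube-scheduler, and each reflector retries with backoff until it succeeds; the caches-synced line at 03:02:12 marks the end of it. A standalone sketch of the same wait-until-permitted pattern; the kubeconfig path is an assumed placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Retry the List while it is forbidden, the way a reflector keeps
	// retrying until RBAC bootstrapping grants the permission.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, lerr := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
			if apierrors.IsForbidden(lerr) {
				return false, nil // RBAC not reconciled yet; retry
			}
			return lerr == nil, lerr
		})
	fmt.Println("storageclasses listable:", err == nil)
}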
==> kubelet <==
Jan 27 03:22:54 no-preload-887091 kubelet[3469]: E0127 03:22:54.365547 3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vshg4" podUID="33ae36ed-d8a4-4d60-bcd0-1becf2d490bc"
Jan 27 03:23:03 no-preload-887091 kubelet[3469]: I0127 03:23:03.363620 3469 scope.go:117] "RemoveContainer" containerID="b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04"
Jan 27 03:23:03 no-preload-887091 kubelet[3469]: E0127 03:23:03.364611 3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2z8t_kubernetes-dashboard(8bac67fc-9bda-4ec5-99f6-30df6d057894)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2z8t" podUID="8bac67fc-9bda-4ec5-99f6-30df6d057894"
Jan 27 03:23:08 no-preload-887091 kubelet[3469]: E0127 03:23:08.365587 3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vshg4" podUID="33ae36ed-d8a4-4d60-bcd0-1becf2d490bc"
Jan 27 03:23:11 no-preload-887091 kubelet[3469]: E0127 03:23:11.445400 3469 iptables.go:577] "Could not set up iptables canary" err=<
Jan 27 03:23:11 no-preload-887091 kubelet[3469]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Jan 27 03:23:11 no-preload-887091 kubelet[3469]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Jan 27 03:23:11 no-preload-887091 kubelet[3469]: Perhaps ip6tables or your kernel needs to be upgraded.
Jan 27 03:23:11 no-preload-887091 kubelet[3469]: > table="nat" chain="KUBE-KUBELET-CANARY"
Jan 27 03:23:15 no-preload-887091 kubelet[3469]: I0127 03:23:15.364374 3469 scope.go:117] "RemoveContainer" containerID="b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04"
Jan 27 03:23:16 no-preload-887091 kubelet[3469]: I0127 03:23:16.138723 3469 scope.go:117] "RemoveContainer" containerID="b3ef1d0d336815f2ec058d66dd159f0f78f0e1e2f674722cc20ffdaff2d96a04"
Jan 27 03:23:16 no-preload-887091 kubelet[3469]: I0127 03:23:16.138917 3469 scope.go:117] "RemoveContainer" containerID="52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb"
Jan 27 03:23:16 no-preload-887091 kubelet[3469]: E0127 03:23:16.139071 3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2z8t_kubernetes-dashboard(8bac67fc-9bda-4ec5-99f6-30df6d057894)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2z8t" podUID="8bac67fc-9bda-4ec5-99f6-30df6d057894"
Jan 27 03:23:17 no-preload-887091 kubelet[3469]: I0127 03:23:17.143068 3469 scope.go:117] "RemoveContainer" containerID="52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb"
Jan 27 03:23:17 no-preload-887091 kubelet[3469]: E0127 03:23:17.143227 3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2z8t_kubernetes-dashboard(8bac67fc-9bda-4ec5-99f6-30df6d057894)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2z8t" podUID="8bac67fc-9bda-4ec5-99f6-30df6d057894"
Jan 27 03:23:19 no-preload-887091 kubelet[3469]: E0127 03:23:19.377482 3469 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 27 03:23:19 no-preload-887091 kubelet[3469]: E0127 03:23:19.377595 3469 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
Jan 27 03:23:19 no-preload-887091 kubelet[3469]: E0127 03:23:19.378034 3469 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhrmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-vshg4_kube-system(33ae36ed-d8a4-4d60-bcd0-1becf2d490bc): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
Jan 27 03:23:19 no-preload-887091 kubelet[3469]: E0127 03:23:19.379404 3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vshg4" podUID="33ae36ed-d8a4-4d60-bcd0-1becf2d490bc"
Jan 27 03:23:30 no-preload-887091 kubelet[3469]: E0127 03:23:30.372526 3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vshg4" podUID="33ae36ed-d8a4-4d60-bcd0-1becf2d490bc"
Jan 27 03:23:32 no-preload-887091 kubelet[3469]: I0127 03:23:32.364044 3469 scope.go:117] "RemoveContainer" containerID="52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb"
Jan 27 03:23:32 no-preload-887091 kubelet[3469]: E0127 03:23:32.364699 3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2z8t_kubernetes-dashboard(8bac67fc-9bda-4ec5-99f6-30df6d057894)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2z8t" podUID="8bac67fc-9bda-4ec5-99f6-30df6d057894"
Jan 27 03:23:44 no-preload-887091 kubelet[3469]: E0127 03:23:44.367024 3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-vshg4" podUID="33ae36ed-d8a4-4d60-bcd0-1becf2d490bc"
Jan 27 03:23:45 no-preload-887091 kubelet[3469]: I0127 03:23:45.363452 3469 scope.go:117] "RemoveContainer" containerID="52be15a103b51a639a556bc16dbc4db1a6800617b88d20de145e7d18a99acecb"
Jan 27 03:23:45 no-preload-887091 kubelet[3469]: E0127 03:23:45.363671 3469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-k2z8t_kubernetes-dashboard(8bac67fc-9bda-4ec5-99f6-30df6d057894)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-k2z8t" podUID="8bac67fc-9bda-4ec5-99f6-30df6d057894"
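The ErrImagePull/ImagePullBackOff loop above is by design: this test wires the metrics-server addon to fake.domain/registry.k8s.io/echoserver:1.4, and fake.domain is deliberately unresolvable, so every pull attempt dies at the DNS lookup. The failure mode reduces to a one-liner:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Matches the kubelet errors: "dial tcp: lookup fake.domain: no such host".
	if _, err := net.LookupHost("fake.domain"); err != nil {
		fmt.Println("expected failure:", err)
	}
}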
==> kubernetes-dashboard [c623878236cab2cc3807df982c4d6fbddf7c3bf9d48f30537d07db4a6468f489] <==
2025/01/27 03:11:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/01/27 03:12:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
[previous message repeated every 30 seconds through 2025/01/27 03:23:00]
2025/01/27 03:23:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [c1d994b589453b0f758481f1aed5401b976f9d1f1cdc2ece1e8d8640802a2072] <==
I0127 03:02:18.665351 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0127 03:02:18.685447 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0127 03:02:18.688177 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0127 03:02:18.734492 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85bd6e38-3014-43f5-8832-6e12e3bf9ec7", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-887091_0f751520-7a64-4ee8-8e99-1d594fe7dd01 became leader
I0127 03:02:18.739328 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0127 03:02:18.739719 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-887091_0f751520-7a64-4ee8-8e99-1d594fe7dd01!
I0127 03:02:18.840276 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-887091_0f751520-7a64-4ee8-8e99-1d594fe7dd01!
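The storage-provisioner lines show the standard client-go leader-election handshake: acquire the kube-system/k8s.io-minikube-hostpath lock (here an older Endpoints-based lock, per the Event on the Endpoints object), emit a LeaderElection event, then start the controller. A minimal modern equivalent using a Lease lock; the lock name matches the log, while the identity and kubeconfig path are assumed placeholders:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"}, // assumed identity
	}
	// Blocks: renews the lease while leading, runs callbacks on transitions.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { fmt.Println("became leader; start controller") },
			OnStoppedLeading: func() { fmt.Println("lost leadership") },
		},
	})
}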
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-887091 -n no-preload-887091
helpers_test.go:261: (dbg) Run: kubectl --context no-preload-887091 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-vshg4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context no-preload-887091 describe pod metrics-server-f79f97bbb-vshg4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-887091 describe pod metrics-server-f79f97bbb-vshg4: exit status 1 (73.556379ms)
** stderr **
Error from server (NotFound): pods "metrics-server-f79f97bbb-vshg4" not found
** /stderr **
helpers_test.go:279: kubectl --context no-preload-887091 describe pod metrics-server-f79f97bbb-vshg4: exit status 1
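The post-mortem first lists non-running pods with a field selector, then fails to describe metrics-server-f79f97bbb-vshg4 with NotFound: the ReplicaSet replaced the pod between the two kubectl invocations, so the captured name was already gone. The same listing query via client-go; the kubeconfig path is an assumed placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}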
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1588.39s)